1. NSX-T Firewalls
Before we get started, I want to make sure that everybody understands the concept of a stateful firewall. So let's break it down in a little diagram here. At the bottom left, you can see I've got a virtual machine, VM 1, that's connected to a layer 2 segment, and it's running on one of my transport nodes, which is an ESXi host. The layer 2 segment here could be connected to either a tier-0 or tier-1 distributed router, depending on whether we're using single- or multi-tier routing. And then over here, we've got a service router running on the edge, and we've got a physical router in my external network. So now let's say that VM 1 generates some sort of request, maybe to view a website or something like that, and that request flows through the layer 2 segment. It hits the tier-1 or tier-0 distributed router and gets passed along to the service router at the edge.
Now, there may be a firewall implemented here, so at this point the firewall rule set is going to be checked, and let's just assume that the firewall is allowing this traffic through. So now my stateful firewall says, "Okay, this traffic is allowed out," and it flows out and eventually hits the destination. Now we've got some kind of reply coming back in. That reply is going to be dynamically allowed through the edge. The edge is a smart, stateful firewall that keeps track of the status of ongoing connections. So when the traffic was allowed outbound, the edge basically said, "Okay, I've got a connection that is now established. If any response traffic comes back, I don't necessarily need a rule to allow that traffic in. As long as the response traffic matches a known connection, it'll be allowed in." And this is really an ideal design here, because what I would like to do at the edge is have really restrictive rules for what is allowed in. I don't want a bunch of traffic allowed in; I want very strict rules about what is allowed in. But if something on the inside of my network establishes a connection, well, then I definitely want to permit that response traffic to come back.
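The connection-tracking behaviour just described can be sketched as a toy example. This is only an illustration of the concept, not NSX code; the names and the (source, destination, port) tuple format are made up for demonstration:

```python
# Toy sketch of stateful-firewall logic (illustration only, not NSX code).
# Outbound traffic is checked against rules; allowed flows are recorded in a
# connection table, and return traffic matching a known flow is allowed in
# without needing an explicit inbound rule.

def make_firewall(outbound_rules):
    """outbound_rules: set of (src, dst, port) tuples allowed outbound."""
    connections = set()  # established flows, keyed by (src, dst, port)

    def outbound(src, dst, port):
        if (src, dst, port) in outbound_rules:
            connections.add((src, dst, port))  # remember the flow
            return True
        return False

    def inbound(src, dst, port):
        # A reply is allowed only if it matches an established connection
        # (endpoints reversed); unsolicited traffic would need its own rule.
        return (dst, src, port) in connections

    return outbound, inbound

out_ok, in_ok = make_firewall({("vm1", "website", 443)})
assert out_ok("vm1", "website", 443) is True   # request allowed out
assert in_ok("website", "vm1", 443) is True    # reply matches known flow
assert in_ok("attacker", "vm1", 443) is False  # unsolicited traffic blocked
```

A stateless firewall, by contrast, would have no `connections` table at all, which is why it needs explicit rules in both directions.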
So a stateful firewall really simplifies your rule set, because you can have really permissive rules outbound and really restrictive rules inbound, and it'll still allow all of those responses to get through. And if you ever hear the term "stateless firewall," we're not going to be looking at any of those here. A stateless firewall does not understand the state of connections, so you have to have rules configured in both directions. Okay, so now that we understand what a stateful firewall is, let's look at some examples of different ways to implement firewalls. And we want to look at some of the older ways so that we can understand the benefits of using a distributed firewall with NSX. So here you can see in this diagram that I've got two virtual machines running on the same physical host, right? VM 1 and VM 2. And let's assume that VM 1 is part of my application tier and VM 2 is part of my web tier. As a result, I need to put some controls in place between virtual machines in different tiers. Therefore, I need firewall rules between them.
And so here's my physical firewall down here in my physical network. And what I'm going to need to do in order to implement firewall rules between these two VMs is this: if VM 1 sends some traffic to VM 2, it's going to have to flow through a vmnic, hit the physical network, get switched, arrive at the firewall, get examined by the firewall's rule set, and, if it makes it through, get passed back to the physical switch and through a vmnic until it eventually arrives at VM 2. So using a physical firewall involves multiple wire hops. That's what we call it whenever a packet needs to flow over some physical wire in my network. So this packet is being routed through whatever virtual switching I have configured here via a physical adapter. One wire hop to the switch; one wire hop from the switch to the firewall. That's assuming there's not a router in between. Then one wire hop back to the physical switch, one wire hop back to the vmnic, and finally to the VM. This is also something that we call "hairpinning," where, basically, the traffic is being pushed out to the physical network unnecessarily. So instead of doing that, maybe I could implement a firewall at the edge. Well, let's think about why that design is not ideal for this scenario. This is an example of east-west traffic. So I've got VM 1 and VM 2.
They're on different networks, but they're on the same physical host. And let's just assume that they are connected to layer 2 segments in my NSX domain. Well, if I implement my firewall at the edge, what's going to happen is something very similar to what we saw with a physical firewall. The edge is going to run on a defined set of ESXi hosts, and oftentimes those are not the same hosts as our compute transport nodes; they're separate hosts that are dedicated to the edge. So now, if VM 1 wants to send some traffic to VM 2, again it's going to have to flow over the physical network, get encapsulated, and hit this NSX edge firewall. It's going to have to pass through the rules of that firewall, and then it's going to get encapsulated again and sent over the physical underlay network before it can arrive at the destination virtual machine. So it's a similar issue here with the wire hops: from VM 1, the traffic hits a vmnic, one wire hop to the switch, one wire hop to the host running the edge firewall, then another wire hop and another wire hop back, and eventually it reaches VM 2. So again, we're unnecessarily pushing traffic over the physical network here when we could be doing something very similar to what we learned about before.
With distributed routing, we can create a distributed firewall. And with a distributed firewall, the traffic is simply flowing out of the virtual machine, and the distributed firewall is running as a kernel module within that ESXi host. So at that point, the traffic can be analysed as it leaves the virtual NIC of the virtual machine. And so I can instantiate firewall rules wherever I want. I could have virtual machines connected to the same layer 2 segment, and I could have firewall rules governing how that traffic can flow between VMs on the same segment. In this case, the VMs are on two different segments, but they're on the same physical host. And so we're not pushing that traffic over any kind of physical network. The firewall rules are instantiated in the kernel module on that ESXi host. Okay, so let's walk through one more diagram here. This one's a little bit more complex. Here, you can see I have two virtual machines in my diagram. VM 1 is connected to a layer 2 segment, VNI 5001. VM 2 is connected to a different layer 2 segment, and these VMs are running on different transport nodes. So VM 1 is running on ESXi 1, and VM 2 is running on ESXi 2. Of course, I'm using the distributed firewall here.
Now, I'm leaving a couple of things out of this diagram just to kind of keep it simple. I'm not putting in my distributed routers; I'm not putting in any of that kind of stuff. I'm just going to keep it to the distributed firewall and the firewalling components here. All right, so let's say that VM 1 wants to send some packets to VM 2, wants to ping it, or whatever. So VM 1 generates a ping destined for VM 2, and VM 2 is on a different network than VM 1. So as soon as the packet leaves VM 1, it is inspected by the distributed firewall. Remember, the distributed firewall is instantiated at the virtual NIC level. So it's basically the equivalent of plugging my virtual machine right into a firewall. And so the first thing that happens as that traffic leaves VM 1 is that it is analysed by the distributed firewall. And if the traffic is allowed out, then it will flow to our distributed router. So this rule enforcement is happening before the traffic even hits the router. So now let's say it hits the router. The distributed router determines that this traffic is destined for VNI 5002. So at that point, the distributed router is going to take that traffic and dump it onto that layer 2 segment, VNI 5002. But that's not where the destination virtual machine is.
This traffic needs to be encapsulated and passed over the physical underlay network. And that's another important thing to understand: the encapsulation process does not interfere with firewall rule enforcement. So the firewall rules were enforced here before the traffic got encapsulated. So now, at this point, the firewall rules have been enforced and the distributed router has routed the traffic to the appropriate layer 2 segment. It's going to get encapsulated, it's going to flow over the physical underlay, and it's going to get decapsulated by the receiving TEP (tunnel endpoint) over here. The receiving TEP is going to dump it onto the appropriate layer 2 segment, and then it is going to try to get to VM 2. But before it can get into VM 2, it has to flow through the distributed firewall and be analysed by the inbound rules that are applied to VM 2. So that's the last step before this traffic arrives at VM 2. It has to get analysed by the distributed firewall once more, and it's compared to the inbound rules of VM 2. And if the traffic is allowed in, it reaches the destination. So that's kind of a wider look at the distributed firewall and how it operates as an east-west firewall within the NSX domain. So we want to think of the distributed firewall as our east-west firewall, and the edge firewall, what's going on in the service router, as our north-south firewall that's being applied to traffic as it leaves or enters our NSX domain.
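The east-west path just described (distributed firewall enforcement at the source vNIC, Geneve encapsulation over the underlay, then enforcement again at the destination vNIC) can be sketched as a toy simulation. This is an illustration of the sequence only; the names and rule format are invented:

```python
# Toy walk-through of the east-west packet path (illustration only).
# The DFW is enforced at the source vNIC before encapsulation, and again
# at the destination vNIC after decapsulation.

def dfw_allows(rules, vm, direction, service):
    """rules: set of (vm, 'in'|'out', service) tuples that are permitted."""
    return (vm, direction, service) in rules

def deliver(rules, src_vm, dst_vm, service):
    if not dfw_allows(rules, src_vm, "out", service):
        return "dropped at source vNIC"
    # Geneve encapsulation and underlay transit happen here; they do not
    # interfere with rule enforcement, which already occurred above.
    if not dfw_allows(rules, dst_vm, "in", service):
        return "dropped at destination vNIC"
    return "delivered"

rules = {("vm1", "out", "icmp"), ("vm2", "in", "icmp")}
assert deliver(rules, "vm1", "vm2", "icmp") == "delivered"
assert deliver(rules, "vm2", "vm1", "icmp") == "dropped at source vNIC"
```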
2. Demo: Configure the NSX-T Distributed Firewall
In this video, I'll demonstrate how to configure the distributed firewall in NSX-T 3. And as you can see here, I'm once again using the free labs available at hol.vmware.com. So here I am, already logged into the NSX-T user interface, and I'm going to browse to Security. And within Security, we've got a couple of different sections that we can configure. First off, we see east-west security, and under east-west security, we see the distributed firewall. These are the firewall rules that are applied to east-west traffic within my NSX domain, whereas my north-south security is applied at the edge, on the northbound interfaces of my tier-0 or tier-1 gateways. So let's start by looking at east-west security. We're going to go over here to the distributed firewall. And within the firewall, since we're in a lab environment here, as you can see, there are already a number of policies in place.
I'm just going to start by clicking on "Add Policy." And here we can see a new policy that I'm going to create. So I'm going to start by naming this new policy Demo DFW. Now, just a couple of things to note here: notice that there's a Publish button here at the top right corner. So the configuration changes that I'm about to make are not going to be immediately applied. They're not going to be applied until I hit the Publish button. So at least initially, this new firewall policy that I'm creating is having no effect whatsoever. And you can see here that there's one total unpublished change. I can go ahead and publish those changes, but if I've made changes that I don't want to publish and just want to get rid of, I can click on Revert to get rid of them. And this is here to protect us. We don't want changes taking effect on a live firewall while we're still making them. We want to make the necessary changes, verify them, and then, once we're done, publish all of the changes simultaneously. Another thing I want to point out is that at the very bottom here, we've got the default layer 3 rule. And notice that it's going to allow traffic from any source to any destination, for any service, applied to the entire distributed firewall.
So the blanket statement at the end of my distributed firewall is to allow everything. And so I may want to potentially change this to drop as the default action, but I'm not going to do that until I'm relatively confident that I've perfected my distributed firewall configuration to allow the appropriate traffic through. Okay, so on our new firewall policy, I'm just going to click this gear icon on the far right, and you'll notice there are a few settings that we can choose from here: TCP Strict, Stateful, and Locked. And so the first setting here, TCP Strict, determines whether or not the firewall needs to see a successful three-way handshake when a TCP session is established. And there are some common denial-of-service exploits in which incomplete handshakes are initiated and half-open connections are established, potentially bringing our application down. So if we want to enforce this strict three-way handshake for TCP sessions, we can do that here. The Stateful setting determines whether or not we want this particular policy to be stateful.
So should return traffic be dynamically allowed through, like a stateful firewall? We can choose whether or not we want to enable that on a specific policy here. Finally, Locked determines whether I want to lock this policy to prevent other users from modifying this section. I'm going to go ahead and leave that unselected. So I'm going to leave all the default settings here and hit Apply. And now I'm just going to go ahead and click on Publish. I'm fine with all of the changes I have made thus far. So I haven't added any rules to this policy yet, but I've gone ahead and published the policy at this point. You can see here that the publishing process is in progress. And there we go. Now it's successful. So now let's start adding some rules. I'll select my new policy here, I'll click on "Add Rule," and I'm just going to name my rule "Demo Rule Web." So this is going to be a rule for my web servers, and I'm going to allow traffic from any source. But for destinations, I'd like to target a specific group of virtual machines here: my three-tier web servers. And this group is going to include certain virtual machines that are web servers.
Okay? So that's what I'm going to do here. I'm going to pick a group of servers that I want to apply this firewall rule to as the destination. As a result, any traffic destined for these web servers from anywhere will be subject to this firewall rule. And so what type of traffic do I want to control here? I could pick some specific services, and as you can see here, there are all sorts of services that are already prebuilt. So I can just select an existing service. For example, if my web servers need to be able to handle HTTP and HTTPS traffic, I can choose those services. And that makes my firewall configuration much simpler if I can just grab existing services and incorporate them into my firewall rules. Now, I'm not going to deal with any profiles yet. These are context profiles that operate at the application layer, for things like, for example, antivirus and things along those lines. So I'm going to cancel that, and I'm not going to apply a profile. And then, under "Applied To," I could focus this firewall configuration on a specific group of virtual machines. So if I only want to implement these rules and this policy on a certain group of VMs, I can do that. But for now I'm going to apply it to my entire firewall.
So the scope of this policy is the entire distributed firewall. Now, if I want to improve the performance and efficiency with which my firewall operates, I can do so by modifying the Applied To field. And by the way, I could do that relatively easily here, because this rule is only being applied to my web servers, so I could choose to limit the scope of what this rule actually needs to be pushed down to. But I'm just going to go ahead and cancel this and leave it as is. And then I can choose my action: I can allow, drop, or reject. Allow is fairly self-explanatory. Drop simply discards the traffic silently. Reject also drops the traffic, but it sends an ICMP response to the sender indicating that the traffic has been administratively prohibited. So the reject action can actually make it a lot easier to troubleshoot and determine if traffic is being blocked by a firewall rule. That's why you may choose to use reject instead of drop. And then, last but not least, do I want to enable or disable this rule? I can choose whether or not to enable it there. So at the moment I have established a configuration, but that configuration is not actually applied.
This is not going to be applied until I hit the Publish button up here. And now my firewall rule changes are being pushed live. One other side note I want to mention is that you'll notice there's a little time window icon up here on the policy. This is a cool new feature of NSX-T 3: I can specify the days of the week and the hours during which this policy is actually applied. So if this is a policy that only makes sense during the business day or at night, I can control when it is actually applied. I'm going to cancel that; I'm not going to set any time rules for this policy. So now I've gone ahead and applied this rule set. Let's go back to the security overview screen here, and if we scroll down a little bit, we can see the distributed firewall rule utilization. We can see the total number of distributed firewall rules, how many identity firewall rules, and how many compute firewall rules. Remember, this is a hands-on lab environment; a lot of this configuration was already established here before I even logged in.
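For reference, the same policy and rule could be pushed through the NSX-T 3 Policy API instead of the UI. This is a hedged sketch: the manager address, the policy ID, and the group path for the three-tier web servers group are assumptions for illustration, while the HTTP and HTTPS service paths are part of the default NSX-T service inventory.

```python
import json

# Hedged sketch of the Demo DFW policy and Demo Rule Web built as an
# NSX-T Policy API payload. Group path and policy ID are assumptions.

def build_policy():
    return {
        "display_name": "Demo DFW",
        "category": "Application",  # the category this demo policy lands in
        "stateful": True,           # return traffic dynamically allowed
        "tcp_strict": False,        # True would require a full 3-way handshake
        "locked": False,            # True would block edits by other users
        "rules": [{
            "display_name": "Demo Rule Web",
            "source_groups": ["ANY"],
            # assumed path for the "3-tier web servers" group:
            "destination_groups": ["/infra/domains/default/groups/3-tier-web"],
            "services": ["/infra/services/HTTP", "/infra/services/HTTPS"],
            "scope": ["ANY"],       # the "Applied To" field
            "action": "ALLOW",      # other options: DROP, REJECT
        }],
    }

payload = build_policy()
print(json.dumps(payload, indent=2))
# The UI's "Publish" corresponds to actually sending the call, e.g.:
# requests.patch(f"https://{nsx_mgr}/policy/api/v1/infra/domains/default"
#                "/security-policies/demo-dfw",
#                auth=(user, pw), json=payload, verify=False)
```

Until that PATCH is sent, nothing changes on the firewall, which mirrors the unpublished-changes behaviour shown in the UI.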
So let's go back to the distributed firewall and take a look at some of the different categories of rules that we could potentially apply. There are Ethernet rules, and by the way, these categories are applied in order, right? So the Ethernet rules are the ones that are applied first. We can see there's a default layer 2 policy here that's already set up automatically in this lab environment. But we could add rules to this policy as well. If I want to create new rules here, I could do that. I can establish certain sources and destinations for these new rules and establish new rules in this section. And the rules in this section are applied before any of those in the other sections. So let's look at the next set of rules: emergency rules. And so basically, with an emergency rule, we've got some kind of emergency, maybe a quarantine that we need to uphold or something like that, that we need to add a policy for. Maybe we've got a virus that's spreading and we want to go ahead and establish emergency rules. And remember, these rules are organised by these categories, and each category has an evaluation precedence. So the Ethernet rules are evaluated first, then the emergency rules, and then the infrastructure rules. These infrastructure rules are going to define access to shared services like Active Directory, DNS, NTP, backup management servers, and so on and so forth.
And so we can see some of the rules that are pre-built here in the hands-on lab demonstrations, things like Active Directory servers and stuff like that. Those are the infrastructure rules for the environment. Environment rules are rules between security zones. So I might have a production environment, a development environment, and a test environment, and I can establish rules specific to those zones here. And then, last but not least, the application rules. This is where the policy that I created exists. And this is where I'm going to create rules between applications, between application tiers, and between microservices. If I'm setting up distributed firewall rules for groups of individual VMs, this is where I'm going to do that. So we have these different categories, and these rules are enforced in order, right? Ethernet rules first, then emergency, infrastructure, environment, and finally application. The rules in each of these categories are applied from the top down. So whichever rule is first in order, that's the rule that gets looked at first. And so the first rule that traffic matches is the rule that's applied. So, for example, if traffic matches the Demo Rule Web rule, it'll be automatically allowed. But if it's not headed for one of those destinations, then it doesn't match this rule, and it will move on to the next rule in my policy here.
And if it's a match for any of these rules, whatever the first rule it matches, that's the rule that will be applied to it. So I have to keep that in mind when I'm setting up these firewalls, because the order in which I create these rules is very important. And a good practise when it comes to this, or really any other firewall, is to put the more specific rules towards the top of the firewall and then put more of the catch-all type of rules, the more generalised rules, towards the end. So you want the more specific rules to be applied first and the more generalised rules to be applied toward the end. Okay, so I made a rule, and we can see my Demo DFW policy and my Demo Rule Web here. And I applied that rule to the three-tier web servers group. Where did that group come from? Let's take a look at the inventory tab. And as you can see, twelve groups have already been defined in the lab environment. And here is my three-tier web servers group. So this is a group of virtual machines, and I can view the members right here.
So if I click on "View Members," you can see that there are three web servers that are included. We can see the IP addresses that are included in this group as well. So let's create one of these groups just so you can get an idea of how to do this. I'm going to create a new group called Rick Demo, and then I'm going to simply set the members of this group. Only, it's not really so simple: there are a variety of different criteria that I can choose from here. Most simply, I can just choose members directly, right? So I can choose individual virtual machines that should be members of this group. I can choose groups that should be nested inside of this group. I can choose the segment ports that should be part of this group. I can also set lists of IP addresses that should be part of this group, or MAC addresses. I can add Active Directory groups. So if I want to enforce distributed firewall rules based on somebody's identity and their membership in an Active Directory security group, I can do so here. But what I really want to focus on here is the membership criteria. I'm going to click Add Criteria, and I can have a whole bunch of different criteria here that can cause virtual machines to become part of this group. So, for example, if a virtual machine has an operating system name that contains Windows, the virtual machine could be dropped into this security group.
Or I could base the criterion on the computer name. So if the virtual machine name contains the letters "DB," it'll be automatically dropped into this group. So if I knew all of my database servers had the characters DB in their names, I could create this group, and then I could create a policy that is applied to all of my database servers very easily. So those are some of the different ways that we can work with these groups. I just created a new group here, so I'll go ahead and save it. We don't need to publish that; when I'm just creating a new group, we don't actually have to publish that change. So now that I've created my little demo group here, let's take a look at it. I'm going to click on "View Members," and it looks like I've got one virtual machine that had the letters "DB" in the name, so that worked. Here's another virtual machine with the letters DB in the name. That's two database servers, and it looks like there's one virtual machine that must have a Windows operating system. So it looks like my group worked successfully. All right, let's go back to Security and go back to our distributed firewall here.
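A group with dynamic membership criteria like this can also be defined through the Policy API. A hedged sketch follows, with a hypothetical group ID, mirroring the "VM name contains DB" criterion used in the demo:

```python
# Hedged sketch of a dynamic NSX-T group expressed as a Policy API payload.
# The group ID is hypothetical; the Condition expression mirrors the UI
# membership criterion "virtual machine name contains DB".

def build_group():
    return {
        "display_name": "Rick Demo",
        "expression": [{
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Name",
            "operator": "CONTAINS",
            "value": "DB",  # any VM whose name contains "DB" joins the group
        }],
    }

group = build_group()
print(group)
# Creating it would look something like:
# requests.patch(f"https://{nsx_mgr}/policy/api/v1/infra/domains/default"
#                "/groups/rick-demo",
#                auth=(user, pw), json=group, verify=False)
```

As in the UI, saving a group does not require a firewall publish; the group simply becomes available for rules to reference.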
Another consideration is that if I make a mistake while making changes to the firewall, those changes may be difficult to reverse. It can be challenging to find the individual statements in a firewall that are mistakes, especially if I have to review the thing line by line. In NSX-T 3, we have the ability to save our configuration. So I'm going to call it "pre-config," and then I'm going to put today's date, and what I'm typically going to do any time that I work with the distributed firewall rule set is save a configuration so that I have a restore point. So now I have the ability to go back to a known-good list of firewall rules without having to figure out exactly what I messed up. And just to demonstrate, I'm going to delete my policy, I'm going to publish this, and I'm just going to refresh here to make sure it's actually gone. And yes, it's gone. And then I'm going to go to Actions here, and I'm going to view my saved configurations. And here you can see I have a saved configuration called "pre-config," and I can go back to that saved configuration. I can see what the differences are between it and what I have currently configured, and I can load this configuration back up, and then I can go ahead and publish those changes if I like the way it looks. So that gives me a great way to protect myself when I'm working on this firewall and to give myself a restore point that I could very easily go back to if something has gone wrong.
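To recap the evaluation model from this demo, the fixed category precedence and the top-down, first-match behaviour within each category can be sketched as a toy evaluator. This is an illustration of the logic only; the rule format and packet fields are made up:

```python
# Toy sketch of DFW rule evaluation order (illustration only).
# Categories are walked in fixed precedence; within a category, rules are
# checked top-down, and the first matching rule decides the packet's fate.

CATEGORY_ORDER = ["Ethernet", "Emergency", "Infrastructure",
                  "Environment", "Application"]

def evaluate(rules_by_category, packet, default_action="ALLOW"):
    for category in CATEGORY_ORDER:
        for match, action in rules_by_category.get(category, []):
            if match(packet):
                return action  # first match wins; nothing further is checked
    return default_action      # the default rule at the very bottom

rules = {
    "Application": [
        # specific rule first, as recommended:
        (lambda p: p["dst"] == "web" and p["port"] in (80, 443), "ALLOW"),
        # generalised catch-all placed last:
        (lambda p: True, "DROP"),
    ],
}
assert evaluate(rules, {"dst": "web", "port": 443}) == "ALLOW"
assert evaluate(rules, {"dst": "db", "port": 1433}) == "DROP"
```

Swapping the two Application rules would make the catch-all DROP shadow the web rule entirely, which is exactly why specific rules belong at the top.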
3. Demo NSX-T Firewall Rules for a Three-Tier Application
In this video, I'll demonstrate how to create NSX-T firewall rules for a three-tier application. And we'll be using the free labs that are available at hol.vmware.com. So let's start by going to the Security area here and examining the distributed firewall rules that have already been created by default. And specifically, I want to focus on this particular firewall policy here for a three-tier app. But I also want to take a look at this final section. It is worth noting that the default rule at the bottom of the firewall allows all traffic from any source to any destination. So basically, the only way that traffic is going to be stopped by this firewall is if it is explicitly blocked at some point. So I just want to point that out.
That default statement at the end there is very important. But what I do want to focus on in this example is the firewall rules for a three-tier application. And I'm just going to start by opening a new tab here in the lab environment. And under "three-tier apps," I'm going to click on "Web One." That's one of my web server virtual machines. And basically, there's an application server and a database server on the back end. So if this web page is responsive, that means that my application is working successfully. My three-tier application is currently operational, because I can access this website. So let's make a quick change here to the firewall. I'm going to go to this default rule at the end, and rather than just allowing everything that does not match a prior statement in the firewall, I'm going to change this rule to drop.
So as soon as I publish this rule, here's what the outcome will be: unless the traffic is permitted by some statement within this firewall, it will be dropped. The final statement, the default statement, is to drop all traffic that has not matched one of the prior firewall rules. And so now I've published that rule. I'm going to go back to my customer database here and I'm going to try to refresh it. Let's see if it works now. And it's not working. So it's pretty safe to assume that the three-tier app rules are not working as designed. The traffic must have been matching that final firewall rule, the default rule, and that must have been what was allowing it through. So something's not right here. One of the rules in my three-tier app policy must be misconfigured, and so we're going to try to address that now. So let's take a closer look at the rules that are contained within this part of our firewall configuration. And the first rule we see here is the rule for HTTP and HTTPS traffic, basically saying traffic from any source is allowed to hit these destinations, the three-tier web servers. And I have Web 1A, Web 2A, and Web 4A.
So that statement should allow all of the traffic coming in from anywhere to hit those web servers on HTTP or HTTPS. The second rule governs traffic from the web servers, that's my source, to the application servers. And we can see that TCP port 8443 is open. So we're allowing traffic from the web servers to the application servers on TCP port 8443. And I can see what these rules apply to here: they're applied to the app servers and to the web servers. And then the third rule is a rule for traffic that is coming from the application servers, and the destination is the database servers. And we are allowing TCP port 300:51 through. This rule is applied to app servers and database servers. So that's my current firewall configuration. So it looks like I actually have the appropriate set of rules configured here. Why are they not working? Well, with any of these firewall rules, we've got this little slider here that determines whether the rule is enabled or disabled. So let's enable this rule allowing access to the web servers in this environment and try to reload this website and see if we can now hit our three-tier app. And it's still not working.
At this point, we can safely assume our request is probably hitting one of those web servers, but because the web server can't access the application server, maybe that's what's breaking it. Let's go ahead and enable that rule as well. And also, let's publish these changes, because flipping the sliders doesn't do anything on its own; I have to actually publish those changes before they take effect. So now I've enabled two of the three rules. It looks like it's still not working, though. Let's go ahead and enable the rule that is allowing the app servers to talk to the DB servers and publish this. And then we'll go ahead and see if our web server starts responding now. And look at that. It's back up. Okay, so at this point, all we've basically done is verify that, hey, if we've got these rules enabled and if the rules are configured correctly, this three-tier application works, right? All of the application components are functional, so as long as I enable the appropriate rules and they're properly configured, this application is working correctly.
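The default-rule flip made at the start of this exercise (ALLOW to DROP) can also be done through the Policy API. A hedged sketch follows; the IDs "default-layer3-section" and "default-layer3-rule" are the usual defaults in NSX-T 3, but verify them in your own environment before relying on them:

```python
# Hedged sketch of toggling the default layer-3 rule's action via the
# Policy API. Section and rule IDs are assumptions to verify per environment.

def default_rule_patch(action):
    """Return the API path and PATCH body for the default layer-3 rule."""
    assert action in ("ALLOW", "DROP", "REJECT"), "unsupported action"
    path = ("/policy/api/v1/infra/domains/default/security-policies/"
            "default-layer3-section/rules/default-layer3-rule")
    return path, {"action": action}

path, body = default_rule_patch("DROP")
print(path, body)
# Sending it would look something like:
# requests.patch(f"https://{nsx_mgr}{path}", auth=(user, pw),
#                json=body, verify=False)
```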
That's really all we've done thus far. So now let's experiment a little bit here. I'm going to select this three-tier app section of my configuration here, this policy, and I'm going to go ahead and delete it, and I'm going to publish that change. And then I'm going to go over to the inventory screen here, and I'm going to look at my groups, and I'm going to wipe out my App Servers group, my DB Servers group, and my Web Servers group. I'm going to delete those groups. So what I'm basically doing is wiping out some of the pre-built configuration here that's automatically created as part of the lab environment. I'm getting rid of these groups of my three-tier virtual machines, and I'm getting rid of the policies that were automatically created and present for me when I logged into the lab environment. And it looks like this particular group is still being used as a reference. Let's go back to our distributed firewall here. And I'm actually going to delete all of these other policies as well, everything except for my default rule at the very end. And let's go back to inventory and see if I can get rid of this final group, the three-tier web servers group. There we go. It won't allow you to delete a group if that group is referenced in a firewall policy. So the reason I deleted everything is primarily so that we can rebuild it ourselves and ensure that the application works. If we get the firewall settings correct, we should be able to get this back up and running. So let's rebuild it. I'm going to add a group, and I'm going to call my group Rick Demo Web Servers.
And now I want to add some members to this group. I could create some dynamic inclusion criteria; for example, if the virtual machine name includes “web,” that would be a great way to set up this group. I could specify the IP addresses of the members or the MAC addresses. I could set up Active Directory security groups here. But what I’m going to do is just go over to the members and select the specific virtual machines that I want to include in this Web Servers group. I’m going to pick Web 1, Web 2, Web 3, and Web 4, and I’m going to go ahead and apply that. So now I’ve added all of the web servers to this group, and I’ll go ahead and save it. Now I’m going to follow a similar set of steps for my app and DB servers. I’ll call my next group Rick Demo App Servers. I’ll click on “Set members,” and I’m going to pick my individual application servers here; under Virtual Machines, there’s App 1A. I’ll apply that and save that one, and then I’ll add a third group. I’m going to call this one Rick Demo DB Servers. I’ll click on “Set members,” and again, I will pick some specific virtual machines as members.
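For the dynamic-criteria option mentioned above, the NSX-T Policy API expresses it as a `Condition` inside the group’s `expression` list. Here is a hedged sketch of what that payload could look like, matching a VM name containing “web”; the group names are from this demo, but treat the exact field values as illustrative:

```python
def vm_name_group(display_name: str, substring: str) -> dict:
    """Group whose membership is computed dynamically: any virtual
    machine whose Name contains the given substring."""
    return {
        "display_name": display_name,
        "expression": [{
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Name",
            "operator": "CONTAINS",
            "value": substring,
        }],
    }

web_group = vm_name_group("Rick Demo Web Servers", "web")
# PUT to /policy/api/v1/infra/domains/default/groups/<group-id>
```

With a group like this, a fifth web server named “web-5” would join the group automatically, with no firewall changes needed.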
And the only member I’m going to add here is my DB virtual machine, and I’ll save that as well. Okay, so now I’ve rebuilt my three groups. I have a collection of all of my web server, application server, and database server virtual machines. So now it’s time for me to configure my firewall rules. Basically, the rules that I’m going to set up are: I’m going to allow any source to send HTTP and HTTPS traffic to my web servers; I’m going to allow the web servers to send traffic to the app servers on a specific set of ports; and the app servers will be allowed to send traffic to the DB servers on a specific set of ports. The web servers cannot directly communicate with the DB servers. So we’re basically forcing traffic to flow through our three-tier application in order, and we’re not opening any firewall holes that we shouldn’t be opening. So let’s go back to Security. Under Security, I’m going to go to my distributed firewall and add a new policy. I’ll click on “Add Policy,” and I’m just going to call it Rick Three Tier. This is going to be the name of my policy, and I’ll expand it here. There’s nothing in there thus far. So that’s my new policy. I’ll click on the little ellipsis next to it, and I’m going to choose Add Rule.
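In Policy API terms, the empty policy we just created is roughly a `SecurityPolicy` object whose `rules` list will be filled in over the next steps. A minimal sketch, assuming the default domain and the Application category (the usual category for app-specific DFW rules):

```python
def security_policy(display_name: str) -> dict:
    """An empty distributed-firewall policy; rules get added under it."""
    return {
        "resource_type": "SecurityPolicy",
        "display_name": display_name,
        "category": "Application",  # DFW category for application rules
        "rules": [],                # the three tier rules come next
    }

policy = security_policy("Rick Three Tier")
```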
And the first rule I’m going to create is going to be for my web servers. What I’m going to do is allow any source to send traffic to a specific group. The group that I created was called Rick Demo Web Servers, and I included my four web server virtual machines in that group. So I’ll apply that as the destination. As far as services go, I’m going to enable HTTP and HTTPS. There we go, HTTP and HTTPS; I’ll apply that, and I’m going to make sure that this rule’s action is set to allow. I can also focus this rule on specific groups, so under “Applied To,” I’m going to choose my Rick Demo Web Servers group. I’ll go ahead and apply that, make sure my rule is enabled, and publish that change. So now I have a rule allowing traffic to those web servers. Now that I’ve opened up traffic to my web server VMs, let’s do a quick refresh on this website and see if I’m getting any response. And so far, we’re getting nothing. The reason we’re still getting nothing is that we have not allowed any traffic to our application servers. And just to reiterate the effect of this default rule one more time: if I were to change it to “allow” and publish, then guess what’s going to happen? Everything is going to suddenly start working. But that’s not what I’m looking for here. I only want to let specific traffic through. This is essentially what we call a “whitelist” approach: by default, everything gets dropped except for the traffic that we specifically whitelist. So let’s add a second rule. I’m going to click on “Add Rule,” and I’m going to start to allow some traffic into my application servers.
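The web-tier rule we just built maps onto a Policy API `Rule` object fairly directly: sources, destinations, services, an action, and a `scope` field, which is the API’s name for “Applied To.” A sketch under the assumption that the group lives at a path like the one below (the exact path segment depends on the ID NSX assigned):

```python
# Assumed policy path for the group; the real ID may differ in your lab.
WEB_GROUP = "/infra/domains/default/groups/rick-demo-web-servers"

def web_rule() -> dict:
    """Any source -> web servers, HTTP/HTTPS only. The scope field
    ("Applied To") instantiates the rule only on the web servers' vNICs."""
    return {
        "resource_type": "Rule",
        "display_name": "Web Servers",
        "source_groups": ["ANY"],
        "destination_groups": [WEB_GROUP],
        "services": ["/infra/services/HTTP", "/infra/services/HTTPS"],
        "scope": [WEB_GROUP],
        "action": "ALLOW",
    }

rule = web_rule()
```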
And what I specifically want to allow is traffic from my web server virtual machines. People are going to connect to the web servers, and the web servers are going to communicate with the app servers. So the web servers will be the source, and the destination will be my group of app servers. I’ll choose those as the destination, and then I’ll choose the services. I believe it was port 8443 that we needed to open up between the web servers and the app servers, so I’m going to look for that: 8443. And there it is: TCP port 8443. I’m going to choose that port and allow it. So now, for app servers, we’re allowing traffic from the web servers to the app servers on port 8443. I’m just going to apply this to the entire distributed firewall, but again, I could target these two groups and point the configuration right at them. So now I’ve got a rule for my app servers that allows my web servers to communicate with my app servers. And let’s just test this again to make sure it’s still not working. It shouldn’t be working yet because I haven’t enabled the necessary rules to reach my DB tier. Still not working. So let’s add one more rule to this policy. I’m going to add a new rule and call it “DB Servers.” And I’m going to say that I want to allow the application servers to communicate with the database servers, and I want to do so on a specific set of ports.
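The app-tier rule looks much the same as the web rule, except that the port is carried inline as a TCP port-set entry and the scope is the whole distributed firewall, mirroring what was just configured in the UI. As before, the group paths are assumptions:

```python
# Assumed policy paths; the real IDs may differ in your lab.
WEB_GROUP = "/infra/domains/default/groups/rick-demo-web-servers"
APP_GROUP = "/infra/domains/default/groups/rick-demo-app-servers"

def app_rule() -> dict:
    """Web servers -> app servers on TCP 8443, applied DFW-wide for now."""
    return {
        "resource_type": "Rule",
        "display_name": "App Servers",
        "source_groups": [WEB_GROUP],
        "destination_groups": [APP_GROUP],
        "service_entries": [{
            "resource_type": "L4PortSetServiceEntry",
            "l4_protocol": "TCP",
            "destination_ports": ["8443"],
        }],
        "scope": ["ANY"],  # entire distributed firewall, as in the demo
        "action": "ALLOW",
    }

rule = app_rule()
```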
Now I am going to do something a little bit different here. For the destination, I’m going to choose Rick Demo DB Servers. So the source is my app servers, and the destination is my DB servers. For services, like I showed you in the last couple of rules, I can pick a pre-built service, or I can click on “Add Service” to create my own custom service and pick the specific protocols that I want to allow through. So I can define my own custom services, specifically for applications of my own that don’t fit neatly into the predefined services that are already built in. I just wanted to show you that we have the ability to create our own custom services, but I’m not actually going to create one here. So for my DB Servers rule, the source is my app servers, the destination is my database servers, and I am allowing that traffic through. And I’m going to click on “Publish.” Now that I’ve created those three rules, let’s try loading this three-tier app again. And look at that. It’s back up and running.
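Even though we skipped it in the demo, a custom service is worth sketching, since it is the mechanism for in-house applications that the predefined catalog doesn’t cover. In the Policy API it is a `Service` object holding one or more port-set entries. The name and port below are purely illustrative; the transcript never names the lab’s actual DB ports:

```python
def custom_service(display_name: str, protocol: str, ports: list[str]) -> dict:
    """A reusable custom service: a single L4 port-set entry that rules
    can reference by path instead of repeating the port list."""
    return {
        "display_name": display_name,
        "service_entries": [{
            "resource_type": "L4PortSetServiceEntry",
            "display_name": f"{protocol}-{'-'.join(ports)}",
            "l4_protocol": protocol,
            "destination_ports": ports,
        }],
    }

# TCP 3306 is a hypothetical example port, not taken from the lab.
db_svc = custom_service("rick-demo-db-ports", "TCP", ["3306"])
```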
Okay, so now that we’ve got this working, let’s go break some stuff. Let’s go back to our distributed firewall. You’ll notice that on the App Servers rule, for Applied To, I chose the entire distributed firewall. Let’s modify that and apply the configuration to a specific group, namely the Rick Demo App Servers group. I’m going to click “Apply” and publish this change. Now, I anticipate this is going to break my three-tier application. The reason I believe it’s going to break is that the source of my traffic is the web servers. As traffic leaves those web server virtual machines, the distributed firewall analyzes it at the network interface it leaves through, and the rule allowing that traffic to the app servers is not actually instantiated for that group of virtual machines. So let’s see what happens. If I click on Refresh, it looks like my application is broken yet again; my little spinner is just spinning there. So let’s go back to the App Servers rule, and for Applied To, instead of just selecting the Rick Demo App Servers, let’s add in the Rick Demo Web Servers, publish that configuration, and see what happens to my three-tier application.
And there we go, back up and running. It loaded instantly. So the rules have to be applied to the groups that actually produce the traffic, on both the sending and receiving side. The goal here is to target these rules at specific groups of virtual machines for a couple of reasons. Number one, we don’t want to unintentionally open traffic on groups of VMs where it shouldn’t be open. So that’s goal number one: target the configuration at specific groups. Goal number two is to keep our firewall kernel modules efficient. I don’t want thousands of rules applied to every single virtual machine; I want each rule applied only to the virtual machines it actually concerns. So in review, we had to apply this rule to both groups because the rules are enforced as the traffic leaves the source VM, on that source virtual machine’s network interface, and then enforced again as the traffic arrives at the destination virtual machine’s virtual network interface. So now we’ve learned how the distributed firewall configuration works. In the next video, we’re going to move right on to the Gateway Firewall, and I would recommend moving on to that video immediately because many of the concepts that we’ve learned here will continue to be reinforced in the Gateway Firewall lesson.
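The two-sided enforcement described above can be captured in one line of configuration logic: because the DFW evaluates a rule at the source VM’s vNIC on egress and again at the destination VM’s vNIC on ingress, the “Applied To” scope must cover both groups. A small sketch, with the same assumed group paths as before:

```python
# Assumed policy paths; the real IDs may differ in your lab.
WEB_GROUP = "/infra/domains/default/groups/rick-demo-web-servers"
APP_GROUP = "/infra/domains/default/groups/rick-demo-app-servers"

def applied_to(source_group: str, destination_group: str) -> list[str]:
    """Scope ("Applied To") for a DFW rule. The rule is enforced twice,
    once at each endpoint's vNIC, so it must be instantiated on both
    the source group and the destination group for traffic to pass."""
    return [source_group, destination_group]

scope = applied_to(WEB_GROUP, APP_GROUP)
```

Scoping this way keeps the rule off every unrelated vNIC in the environment, which is exactly the efficiency point made above.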