VMware 2V0-41.20 Topic: Security Part 2
December 21, 2022

4. Demo Configure the NSX-T Edge Firewall

In this video, we’ll configure the NSX-T 3.0 Data Center gateway firewall, and I’ll be using the free labs available at hol.vmware.com to demonstrate these tasks. So we’re going to configure the north-south firewall. This is basically the perimeter firewall for our NSX domain, and the rules that we create here are enforced independently of the distributed firewall. So the configuration that we create here has nothing to do with the configuration of the distributed firewall.

So the configuration that I apply here is applied at the edge, and we can apply this configuration to either a tier-zero gateway or a tier-one gateway. I can create edge firewall rules for either of those types of gateways. So we have to have a service router, and that’s where these rules are actually going to be applied. So let’s start by clicking on the security link, and under there, let’s click on Gateway Firewall. And let’s start by looking at all the shared rules. And at the moment, as you can see, there are no rules found here. Now here you can see the order in which these rules are going to be enforced. And over here, at the far right, we can see the last set of rules that are going to be enforced, which are our default rules. So we can see these policy default rules here. And basically, the default rule is kind of similar to what we saw out of the box with the distributed firewall. Traffic from any source to any destination, any service—all of that traffic is going to be allowed by default.

So this edge firewall has what we call an “implicit allow all.” At the end, if traffic does not match a firewall rule, it’s going to automatically be allowed. This is what we would call a “blacklist” approach. If I’m going to keep this as my default rule, I need to go through my firewall and explicitly block all the traffic that should be blocked. This is not a security best practice. The best practice would be to say, “Hey, let’s go to this edge firewall, this gateway firewall, and let’s drop everything.” Anything that has not been explicitly allowed should be blocked by default. So that’s just something to consider in your firewall design here. The security best practice is to basically say, “Hey, we only want to allow through the traffic that we have explicitly created openings for.” And if my environment had been configured with a Tier 0 gateway and a Tier 1 gateway in a multi-tier routing topology, I could examine the rules that were specific to each of those gateways.

So in a prior demo, I created a Tier 0 gateway for the VPN demo. I can apply specific rules to that Tier 0 gateway or to the Tier 0 gateway that was created by default in my lab environment. So we can create rules specific to one particular gateway. Under all shared rules, I’m going to draft a new policy, and I’m just going to call the new policy “Allow Web.” If you watched the last video, we had a web server that was the front end of a three-tier application, and we were accessing it from a browser tab. So now I want to open up and allow traffic to reach that web server. So I am going to add a new rule here. The first rule that I’m going to create is going to be called “Block All Web.” Under destination, I’m going to choose that group of web servers, called RickDemo Web Servers, that I created in the prior video, and I’m going to apply that rule here. Then I’m going to choose where this rule is applied to. I’m going to pick my Tier 0 gateway, and I’ll go ahead and click on Apply. So now I’ve created my first rule. I’m going to change the action: instead of allowing this traffic, I’m going to choose “Drop.” Now, one thing I did want to point out here, under “Applied To,” is that I do have the ability to pick specific uplinks on my Tier 0 gateway as well. So I can apply certain rules to one uplink and different rules to another uplink. I’m not going to do that here. I’m just going to apply this rule to the entire Tier 0 gateway.
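To make the rule above concrete, here is a sketch (not an official example) of what the “Block All Web” rule might look like as an NSX-T Policy API payload. The group path, gateway path, and IDs are hypothetical placeholders for this lab; adjust them to match your environment.

```python
import json

# Hypothetical Policy API payload for the "Block All Web" gateway firewall rule.
block_all_web_rule = {
    "resource_type": "Rule",
    "display_name": "Block All Web",
    "source_groups": ["ANY"],
    # Destination: the group of web servers created in the prior video
    # (placeholder path).
    "destination_groups": ["/infra/domains/default/groups/RickDemo-Web-Servers"],
    "services": ["ANY"],
    "action": "DROP",
    # "scope" is the "Applied To" field: here, the whole Tier 0 gateway.
    # Specific uplink interface paths could be listed here instead.
    "scope": ["/infra/tier-0s/Tier-0-Gateway"],
}

print(json.dumps(block_all_web_rule, indent=2))
```

The rule would then be published under a gateway firewall policy for that Tier 0 gateway; the “Applied To” choice between the whole gateway and individual uplinks is what the `scope` list expresses.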

So before we actually publish this rule, let’s just do a quick test. I’m going to open a new browser tab, I’m going to try and hit my three-tier app, and it is currently working. Let’s go back, let’s publish this deny rule that I just created, and let’s open a new tab yet again. And let’s see how my three-tier app looks now. And there we go. I can see that it’s now blocked, so it’s no longer functioning. Okay, so my rule had the desired effect. What I now want to do is open up this web server to a specific range of IP addresses. Specifically, I want to be able to hit it from this particular console. So I’m going to go to the command line of my console, and I’m going to type ipconfig. And here is the IP address that my traffic is going to be originating from: 192.168.110.10. So now that I know the IP address of my console, let’s go back and add another rule to this policy. I’m going to call this new rule:

“Allow Web,” and I am going to specifically allow traffic from the source subnet, which is my console subnet. So I’m just going to click on “Add Group” here, and I’m going to call this new group Console Subnet. Then I’ll click on “Set Members,” and under “Members,” I’m going to choose IP addresses and enter the address range here: 192.168.110.0/24. So what I’m doing here is specifying to allow traffic from the entire subnet that the console machine resides on. I’ll just go ahead and scroll down, save these changes, and click Apply. So now the source is that range of IP addresses that I just put in there. Let me go ahead and specify a destination. The destination, again, is going to be my RickDemo web servers. I’m going to apply this again to my Tier 0 gateway, just like the last rule that I created, and I’m going to specifically allow that traffic through. And one other change that I’m going to make before I apply this, under services for my Allow Web rule, is that I’m going to specify HTTP and HTTPS. So I’m only going to open up the ports that I actually need to open up here.
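The two objects created above can be sketched as hypothetical NSX-T Policy API payloads: a group whose membership is a static subnet, and an allow rule that opens only HTTP/HTTPS from that subnet to the web servers. All paths and names here are examples, not values from the lab.

```python
# Hypothetical group payload: static membership by IP/CIDR.
console_subnet_group = {
    "resource_type": "Group",
    "display_name": "Console Subnet",
    "expression": [
        {
            "resource_type": "IPAddressExpression",
            "ip_addresses": ["192.168.110.0/24"],  # the console's subnet
        }
    ],
}

# Hypothetical allow rule payload: only the needed services are opened.
allow_web_rule = {
    "resource_type": "Rule",
    "display_name": "Allow Web",
    "source_groups": ["/infra/domains/default/groups/Console-Subnet"],
    "destination_groups": ["/infra/domains/default/groups/RickDemo-Web-Servers"],
    # Only open the ports actually needed (placeholder service paths).
    "services": ["/infra/services/HTTP", "/infra/services/HTTPS"],
    "action": "ALLOW",
    "scope": ["/infra/tier-0s/Tier-0-Gateway"],
}
```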

So here we go: HTTP and HTTPS, and here we can see the services that are going to be allowed. And so now I’ve got this “Allow Web” rule. What I’m actually going to do is drag it down a little bit and place it underneath the “Block All Web” rule that I created, and I’m going to click on “Publish.” So now I’ve published these changes, and I would expect that my three-tier app is still not going to work. And the reason it’s not going to work is that these firewall rules are processed in order from the top down. The block rule came first, so the traffic got blocked; it never made it to the other rule. Let’s change the order of our rules here. We’ll put the Allow rule first, publish that change, and see what my three-tier app looks like now. Is it now working? Yep. We’re back up and running. So what we’re now doing is enforcing a set of rules at the edge. Those rules are not going to be enforced inside my NSX domain. If I’m trying to go from the web tier to the app tier, or the app tier to the DB tier, or something like that, these firewall rules are not applied within the NSX domain. These are at the edge, at the north-south border. So the gateway firewall is my north-south firewall: this is traffic coming into and leaving my NSX domain. Whereas the distributed firewall is my east-west firewall, controlling traffic as it moves around inside the NSX domain.
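The top-down, first-match behaviour seen in the demo can be sketched in a few lines. This is illustrative logic only, not NSX code; the rule matchers and packet fields are invented for the example.

```python
# Minimal sketch of top-down, first-match firewall processing.
def evaluate(rules, packet, default_action="ALLOW"):
    """Return the action of the first matching rule, else the default."""
    for rule in rules:
        if rule["match"](packet):
            return rule["action"]
    return default_action  # the gateway's default "allow all" rule

packet = {"src": "192.168.110.10", "dst": "web-tier", "service": "HTTP"}

block_then_allow = [
    {"match": lambda p: p["dst"] == "web-tier", "action": "DROP"},
    {"match": lambda p: p["src"].startswith("192.168.110."), "action": "ALLOW"},
]
# Reordered so the more specific allow rule is evaluated first.
allow_then_block = list(reversed(block_then_allow))
```

With the block rule on top, the packet is dropped before the allow rule is ever consulted; reordering the list fixes it, exactly mirroring the behaviour of the published rules in the demo.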

5. NSX-T Introspection Services

What we’re looking to accomplish here is to integrate third-party services with NSX-T, and the third-party services we’re focusing on in this lesson are network introspection services. We’re thinking about things like next-generation firewalls and intrusion prevention or intrusion detection systems that will actually monitor network traffic itself. In the next lesson, we’ll learn about endpoint protection and how we can actually look inside the operating system of our virtual machines. Because we’re currently focused on network introspection and network traffic monitoring, we’ll divide this into two categories: north-south and east-west.

So let’s start with north-south. Here’s our little diagram, and we’re going to fill it out as we move deeper into the lesson. You can see here that we’ve got a transport node, a virtual machine connected to a segment, and our physical network, with our Geneve-encapsulated traffic flowing over that physical network. Then we’ve got an edge node here. This diagram assumes that we’re using single-tier routing, but you can deploy north-south introspection at either a Tier 0 or a Tier 1 gateway. So what we’re essentially looking to do here is deploy network introspection at the edge: we want the traffic that’s flowing in and out of our NSX domain to be analysed by some third-party solution. And the mechanism that we’re going to use for that is something called an SVM, a service virtual machine. So think of this virtual machine as essentially an appliance that your traffic is going to pass through. We’re going to redirect traffic through this SVM so that it can be analysed against the rules of the SVM. As such, we want it to add the absolute minimum possible latency at the edge node. So ideally, my edge node and my SVM should be deployed on the exact same ESXi host, and the SVM must be deployed on an ESXi host transport node.

The SVM cannot be on any other type of transport node; it’s got to be on an ESXi host. So as part of this process, we’ll create this partner service virtual machine, the SVM, and it will have its own user interface and its own management component that we will use to register this solution with NSX Manager. And once the solution is actually registered with NSX Manager, we can start to configure traffic redirection so that traffic will actually be redirected through this SVM. And again, this is why you want it on the same host as the edge node: because we’re going to be redirecting traffic through this SVM, we don’t want to send it across the physical network unnecessarily. Ideally, we’d like to keep it all on the same host. So let’s take a quick look at the documentation, because I want to talk about this traffic redirection process and how we can selectively identify which traffic should actually be forwarded or redirected to the SVM. So we can create redirection rules for north-south traffic. Essentially, here’s the bottom line: we can specify membership criteria like certain virtual machines, certain IP or MAC addresses, or Active Directory groups. So basically, we may not want all virtual machine traffic to be redirected through this network introspection system. We may only want certain groups of virtual machines to actually send their traffic there.

And so if it doesn’t make sense for a certain group of VMs, then we won’t bother with this north-south network introspection, and we won’t introduce latency unnecessarily. So that’s north-south network introspection, and again, that’s really happening at the edge: as traffic is coming into or out of our NSX domain, that’s when north-south network introspection is enforced. We can also do east-west network introspection. So here in this slide, you see we have virtual machines running on different ESXi hosts. They could be on the same segment, or they could be on different segments. Whatever the situation is, we’ve got east-west traffic flowing between these hosts over our physical underlay network. We’ve already talked about the distributed firewall: as soon as traffic leaves a virtual machine, it is immediately compared to the rule set of the distributed firewall. Well, maybe at that point we also want to force that traffic to flow through some kind of third-party solution, like a next-generation firewall that’s going to analyse all the way up to layer 7. And it could be that traffic is simply flowing from this VM to a VM on another host. It’s not going north-south; it’s just going east-west.

And so yeah, we have the ability to set up network introspection on this east-west traffic as well. Again, this is going to require a partner service VM, an SVM. And there are a couple of different ways, from an architectural perspective, that we can deploy these SVMs. In this slide, you can see that we have deployed an SVM on each transport node. Or, potentially, we could identify a service cluster of ESXi hosts and deploy the SVMs there. So for the rest of this discussion, let’s assume that we have a service cluster and that the SVM is going to be installed on that service cluster. And so we’re going to have to install a partner service manager, basically registering this SVM with the NSX Manager. You’re essentially registering this service so that it can be consumed by NSX-T. So as I continue on here, I’m going to talk about the steps required to install and deploy this successfully. All of those steps are included in the documentation. So first off, you’ll select the compute manager and the transport nodes that the service VM will actually be deployed on. You also have to pick the datastore and management network for the SVMs themselves.

So that’s part of the process: basically identifying, “Hey, where is this SVM actually going to get installed? What datastore is it going to be deployed on? Which management network is it going to connect to?” That’s getting the initial foundation up and running for this east-west network introspection. And then, once you’ve got that ready to go, you’ll specify service segments. The service segments identify the networks where we want this protection to be enforced. At this point, the security service is basically ready to consume, but we still need to add redirection rules. The redirection rules, again, are going to identify which traffic should actually be redirected through this SVM. And that’s an important piece of this puzzle, because the traffic is actually going to have to flow through this SVM, even if it’s east-west. That’s one of the reasons why some organisations choose to deploy an SVM on each and every ESXi host: that way, you’re not sending that traffic over to an SVM running on a dedicated cluster. If the SVM exists locally on an ESXi host, the virtual machines’ traffic can be analysed directly on the source host, and if a rule needs to be enforced, it’s enforced right there on the source host.

So there’s one final concept that I want to mention here, and there’s documentation to walk you through this as well. So in the same documents that I shared with you a moment ago, you can scroll down a little bit and you can find “add a service chain.” And the service chain is basically determining the order in which our security solutions are enforced. So, for example, here’s my virtual machine, and traffic is going to be flowing out of it, and that traffic is going to be compared to the rule set of the distributed firewall. And if the distributed firewall has determined that this traffic is getting blocked, that’s the end of the story. Traffic gets denied by the distributed firewall. We’re not moving any further than that. But let’s assume that the traffic successfully passes through the distributed firewall and that it is allowed. Well, at that point, the traffic needs to be analysed by other security solutions as well. Maybe, for example, the next solution in the service chain is an intrusion prevention system. And if the traffic matches a known attack signature, it’ll get blocked. Or maybe the third link in the chain is a next-generation firewall that analyses traffic all the way up to the application layer.
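The chain described above can be sketched as a simple sequence of checks. This is illustrative logic only, not NSX code; the service names and packet fields are invented for the example.

```python
# Illustrative sketch of service-chain ordering: traffic passes through each
# security service in sequence and is dropped by the first one that rejects it.
def run_chain(chain, packet):
    for name, permits in chain:
        if not permits(packet):
            return f"DROPPED by {name}"
    return "ALLOWED"

service_chain = [
    # 1. Distributed firewall: only web ports allowed.
    ("distributed-firewall", lambda p: p["dst_port"] in (80, 443)),
    # 2. IPS: drop anything matching a known attack signature.
    ("ips", lambda p: "attack-signature" not in p["payload"]),
    # 3. Next-gen firewall: verify the application-layer protocol.
    ("ngfw", lambda p: p["app"] == "HTTP"),
]
```

A packet denied by the distributed firewall never reaches the IPS, and a packet the IPS drops never reaches the next-generation firewall, which is exactly the ordering a service chaining policy controls.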

So that’s what a service chain is. It’s basically the sequence in which this traffic is going to be analysed by the different protection solutions that we have available, and you can configure a service chaining policy to control the order in which these security solutions are enforced. Last but not least, I’d like to take a moment to discuss NSX-T 2.5 and some of the features that have been introduced in it; more information can be found in the NSX-T 2.5 release notes. The first thing I want to mention is packet copy support. Rather than actually redirecting traffic through a service, packet copy support can simply send an additional copy of packets toward a service virtual machine. This allows the monitoring to happen without actually forcing the original packet to flow through the network monitoring service. That is new in NSX-T 2.5. SVM deployment automation is also new in NSX-T 2.5. So in NSX-T 2.5, there are two ways we can deploy the SVMs: we can do it the same way we did in 2.4, where you specifically choose the cluster where they’re going to be deployed, or you can do a host-based SVM deployment, where one service virtual machine is deployed on every single compute host.

And in this case, when you add a new compute host to a cluster, the appropriate SVMs are automatically added to that new host. Dynamic grouping based on tags for north-south traffic is also supported in NSX-T 2.5. We’re going to talk a little bit about endpoint protection in the next lesson, but basically, here’s what’s going to happen with dynamic grouping: let’s say that endpoint protection finds a virus or malware on a virtual machine. When that occurs, we can automatically tag that virtual machine. So, for example, let’s just draw a quick diagram. Here’s a virtual machine, and let’s say that we have endpoint protection running and a virus is detected on this VM. Endpoint protection is actually monitoring the operating system and finding these viruses. And at the point where endpoint protection detects some kind of problem (malware, a virus, or something like that), it can apply a tag to this virtual machine, and we can use that tag to dynamically drop this virtual machine into some sort of group. And that group can have a specific policy assigned to it. So, for example, if a virus is detected, the VM is tagged; now the VM is dynamically included in some group, and that group has a policy assigned to it. Maybe the policy for this particular group is to completely isolate this virtual machine and only allow it to communicate with my security solutions.

So this can automate a security posture when a virus is detected: “Hey, a virus is detected on this virtual machine; let’s apply a whole new set of rules to this VM dynamically in response,” so that now no traffic is allowed in or out and the virus can’t really do anything malicious. This dynamic grouping was supported for east-west traffic in NSX-T 2.4; in NSX-T 2.5, it is also supported for north-south. And so, as you can see in the inventory area of NSX-T, we have the ability to create groups, and we’ve seen some of these groups throughout this course. One of the ways that you can add members to a group is based on tags. So you can see here that dynamic inclusion in groups can be based on tags. We saw in some of our previous demos that I set up dynamic inclusion based on a machine name, an OS name, or a computer name, but it can also be based on tags. So if our endpoint protection is tagging virtual machines, then, based on those tags, the virtual machine can be included in a certain group, and we can have a certain list of firewall rules that are applied to that group.
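A tag-driven quarantine group like the one described above might look like this as a hypothetical NSX-T Policy API payload. The tag value and group name are invented examples; the exact tag string an endpoint-protection partner applies will differ.

```python
# Hypothetical payload for a dynamic group whose membership criterion is a VM tag.
quarantine_group = {
    "resource_type": "Group",
    "display_name": "Quarantine - Virus Found",
    "expression": [
        {
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Tag",           # dynamic inclusion based on a VM tag
            "operator": "EQUALS",
            "value": "VirusFound",  # example tag applied on detection
        }
    ],
}
```

Any VM that receives the matching tag falls into the group automatically, and the isolation firewall policy assigned to the group then applies to it with no manual intervention.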

6. Endpoint Protection

This lesson is about protecting the operating system of the guest virtual machine itself. So we’re analysing the OS and the file system for potential threats, and we are going to integrate with third-party solutions to accomplish this. So we’re going to use some kind of third-party anti-malware or antivirus solution to actually analyse what’s going on inside the operating system of our virtual machines, and it is agentless: there is a guest introspection driver that needs to be installed on the protected virtual machines, but no full antivirus agent. Agentless endpoint protection is supported on both Windows and Linux VMs. This is new in NSX-T 2.5, and Windows and Linux VMs are supported in all NSX-T versions after 2.5.

And endpoint protection is ideal for Horizon if we’re doing desktop virtualization. And this is because with Horizon, you’re going to have many virtual machines running on a single ESXi host. You may have tens or hundreds of virtual machines running on each and every ESXi host, and there’s a very small, thin agent running inside of each of those virtual machines. And this is really important because if you were to run antivirus software in each of those virtual desktops, it would increase the number of resources required for every single desktop, and therefore you wouldn’t be able to run as many desktops per ESXi host. And it’s the same with server virtual machines. If you’re running this software in every single VM, then every VM has to consume resources to make it happen.

So by running a partner service virtual machine on each host, the service virtual machine consumes much less CPU and memory overall than running agents on every single virtual machine would. So let’s take a quick look at the NSX-T 3.0 reference design, and here you can see the components that actually run in endpoint protection. We of course have our NSX Manager cluster, and we of course have our virtual machines, which we’re calling VDI here. So the assumption is that we’re using Horizon to do desktop virtualization, but the process is the same for server virtual machines. We’ve got this little thin agent running on each of these virtual machines, and the thin agent consumes far fewer resources than a full antivirus agent would. Then there’s the partner SVM. The partner SVM runs on each ESXi host, and it does basically all of the processing work for this endpoint protection solution. So we’re offloading many of those processing tasks to a virtual machine that runs on each ESXi host. The process of setting this up is probably going to be pretty similar to what you have experienced with some of the network introspection services. For example, we’ll deploy the partner service and register it with NSX Manager, and the partner services will be deployed on an NSX-prepared host.
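The resource argument above can be made concrete with some back-of-the-envelope arithmetic. Every number here is hypothetical, chosen only to illustrate the shape of the trade-off, not taken from any vendor sizing guide.

```python
# Back-of-the-envelope comparison (all numbers hypothetical) of per-VM
# antivirus agents versus one partner SVM per host on a dense VDI host.
vms_per_host = 100
full_agent_mb = 300   # assumed footprint of a full in-guest AV agent
thin_agent_mb = 15    # assumed footprint of the guest introspection thin agent
svm_mb = 4096         # assumed footprint of the partner SVM appliance

agent_total_mb = vms_per_host * full_agent_mb         # memory with agents
svm_total_mb = vms_per_host * thin_agent_mb + svm_mb  # memory with thin agents + SVM
```

Under these assumed numbers the SVM approach uses a fraction of the memory of the per-VM-agent approach, and the gap widens as VM density per host increases.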

So remember, early on in this course, we talked about the host preparation process and how, when we enable NSX on a host, certain modules are installed. This is one of them. So in order to actually protect the operating system of our virtual machines, we need this guest introspection (GI) driver installed on the virtual machines. And we’ll use VMware Tools to install a thin introspection agent on every guest virtual machine, which is going to allow the virtual machine to communicate with the guest introspection service. Essentially, the purpose of this is to let us know when virtual machines are trying to do things like access files, so that we can notify the service virtual machine of the guest introspection service about those events and implement the appropriate protection policies. So, for example, if a file has a virus or some other kind of malware component, we can deny access to that file, or we could potentially fix the issue on the spot. So in order to enable this service, we’ll use a policy. Basically, we’re protecting these virtual machines by performing antivirus and anti-malware tasks, and the policy creation process allows us to specify the groups of virtual machines that are actually going to be protected by the solution.

So underneath the service, we’ve got these service VMs. These service VMs are going to run on each host within the cluster, deployed from OVF templates. We also have this guest introspection agent that the guest virtual machine actually communicates with. The role of these components works like this: the guest introspection agent is the kernel module that is actually installed on the ESXi hosts, and it’s going to receive information from the guest VMs and forward those messages to the service VM. And again, this is natively installed on ESXi as a vSphere Installation Bundle (VIB). It’s a kernel module, much like the distributed firewall, and it works similarly: we’re enforcing this endpoint protection using a kernel module running on the ESXi host. And the SVM is constantly getting the latest antivirus updates. It’s up and running 100% of the time, so we’re constantly getting updated signatures and the ability to provide current, up-to-date protection for those virtual machines. And if you want to delve deeper into the installation process, you can continue on into the documentation. There, you can see the endpoint protection workflow, which shows you the entire installation process of registering the services, deploying a new service, adding a service profile, and setting up policies for guest introspection as well. So all of this is included in the documentation. Once you have it all set up, you are in a position where you are getting automatic protection. What I mean by that is that if you install a new ESXi host, that host is going to get the endpoint protection components during host preparation, and as long as the policy you’ve defined includes the VMs on that new host, they will be automatically protected by the antivirus or anti-malware solution that you have deployed.

7. Demo: Configure NSX-T URL Analysis

In this video, I’ll demonstrate how to utilise URL analysis in NSX-T 3.0. What we’re doing here is categorising certain traffic and certain URLs. We’re not actually filtering this traffic; we’re just inspecting it and reporting on it. It is not currently possible to drop or allow traffic based on URL analysis, though that may become possible in the future; this is really the first iteration of this feature set. So what we’re going to do with URL analysis is enable it at the NSX edge cluster, and as traffic flows out of my NSX domain towards the Internet, we can perform URL analysis on that traffic. We have to have a layer 7 firewall rule configured on the Tier 1 gateway in order for this to work. So let’s take a look. As you can see, I’m logged into the NSX-T user interface, using the free hands-on labs from hol.vmware.com. I’m using an advanced security lab for NSX-T 3.0.

So we’re going to go to the networking tab here, and I’m going to click on my Tier 0 gateway. And as you can see, we’ve got a Tier 0 gateway configured in this environment, and all of my segments are connected to the Tier 0 gateway. So this is what we would call a single-tier configuration. Next, I’m going to go to my Tier 1 gateway, click on the little ellipsis next to it, and choose Edit. I just want to look at the edge cluster. So I could potentially modify the edge cluster here; I’m not actually going to change the setting, but we can see that this is running on edge cluster 1. Okay, so that’s where my Tier 1 gateway service routers exist, on edge cluster 1, which we can see right here. Okay, so let’s go ahead and go down to our segments. We can see that we already have a number of segments. Let’s take a look at the LS App segment. You can see that it’s connected to the Tier 0 gateway on an overlay network, my overlay transport zone. So that’s the segment that we’re going to focus on here, the LS App segment. Okay, so off to the security section. But I just want to lay the groundwork here: I’ve got a Tier 0 gateway, and I’ve got a segment attached to it called the LS App segment. So let’s go to the security section. Under security, I’m going to click on URL Analysis, and I’ll just choose “Get Started.” We’ll start here at the Settings tab, and as you can see, we already have an edge cluster listed here. So I’m going to go ahead and enable URL analysis on this edge cluster, hit “Yes” here, and now the URL analysis state has been changed to Enabled.

So now that we’ve enabled URL analysis, let’s go ahead and click on “Set,” and we’re going to add a new context profile. This context profile will be called Rick Demo. Under Attributes, I’ll click on Set, and I’m going to add a URL Category attribute. So here are all these different categories of URLs that we may want to stop people from visiting. I’m going to pick Social Networking. And maybe I don’t want people looking at sports; maybe people are checking their fantasy football, getting too much spam, and things like that. So I’ll go ahead and pick a few of these categories, add them, and apply those changes. So now I’ve set my profile, and I’ve set the categories of URLs that I want to monitor for. And so I’ll go ahead and click on “Apply” here, and then click on “Save.” And now that I’ve created my new context profile, I’ll go ahead and click on Apply. And there we go. Now I’ve got a profile that’s assigned to this edge cluster. So the next step is that I need to create a layer 7 rule so that my gateway can successfully monitor these URLs. Basically, the way that URL analysis works, I have to create a rule to capture all of the DNS traffic. My firewall isn’t going to examine every packet to determine which URL people are trying to get to; we’re going to simply capture the DNS traffic so we can find out exactly which URLs everybody’s going to without digging into every single packet. So let’s create this layer 7 rule. Under north-south security, I’m going to go to my gateway firewall, and I’m going to select my Tier 0 gateway. I’m going to click on “Add Policy,” and here’s my new policy; I’m going to name it Rick Demo URL, and I’ll click on the little ellipsis next to it. And I’m going to add a rule, which I’m going to call “DNS Demo.” I’m going to leave my source and destination as is, but for services, I’m going to select some DNS services, because I specifically want to monitor DNS traffic.

So I’ll select DNS over TCP and UDP, and this is going to be applied to my Tier 0 gateway. I’ll go ahead and click on “Apply” there, set the action to “Allow,” and then click on “Publish.” So now I’ve created a layer 7 DNS rule, and I can start capturing some of these URLs and analysing them. Once I actually start getting some traffic, we can take a look at the URLs analysed here; it may take some time to capture enough information for this to be useful. And if you’re interested in setting this up in your own environment, within the NSX-T 3.0 documentation there’s a URL analysis workflow, which basically mirrors exactly what I just went through: I enabled URL analysis, I configured a custom URL analysis profile, and I created a layer 7 DNS rule. The next steps are to start generating traffic and review the URL analysis. So the documentation will walk you through exactly what you need to do here, step by step, to get this completed.
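The context profile built in this demo can be sketched as a hypothetical Policy API payload. The attribute key and the category names shown are assumptions for illustration; use the values the NSX-T UI actually presents.

```python
# Hypothetical payload for the "Rick Demo" URL-category context profile.
url_context_profile = {
    "resource_type": "PolicyContextProfile",
    "display_name": "Rick Demo",
    "attributes": [
        {
            # Assumed attribute key for URL-category matching.
            "key": "URL_CATEGORY",
            # Example category names picked in the demo.
            "value": ["Social Networking", "Sports", "Spam URLs"],
        }
    ],
}
```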
