Amazon AWS Certified Solutions Architect Professional SAP-C02 Topic: Design for New Solutions Part 10
December 16, 2022

86. Implementing Path Based Routing in ALB

Hey everyone, and welcome back to the KP Labs course. So in the earlier lecture, we discussed the application load balancer and one of its features, which is path-based routing, and we saw a demo. So what we'll do in today's lecture is actually configure our first application load balancer. We'll be configuring our first ALB along with path-based routing, and we'll also look into the feature of registering an IP as a target. Perfect. So before we can implement path-based routing, we'll assume we already have two servers over here, and within those two servers we need to have two separate directories.

So let me give you an overview. We already have two servers, KPLabs-01 and KPLabs-02. Now, we'll have a directory called /images within the first server and a directory called /work within KPLabs-02. So let me actually show you. If I navigate to /usr/share/nginx/html, you'll notice that I already have a directory called images; you can simply run the mkdir command to create this directory. Now, within this directory, I have an image, galaxy.jpg, and you can use any image that you intend to have. Similarly, on the second server, KPLabs-02, I'll go to /usr/share/nginx/html, and there I have a directory called work, with a file called work.txt inside it. So what we'll do is change its contents. I'll say "I like travel but no work." Perfect.
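The directory layout above can be sketched locally. This is a minimal sketch: the docroot directory names are local stand-ins, since on the actual EC2 instances the nginx document root would be /usr/share/nginx/html.

```python
import os

# Local stand-ins for the two servers' nginx document roots; on the real
# EC2 instances the docroot would be /usr/share/nginx/html.
DOCROOT_01 = "kplabs-01-html"
DOCROOT_02 = "kplabs-02-html"

# KPLabs-01: an /images directory containing an image (placeholder bytes here).
os.makedirs(os.path.join(DOCROOT_01, "images"), exist_ok=True)
with open(os.path.join(DOCROOT_01, "images", "galaxy.jpg"), "wb") as f:
    f.write(b"placeholder")  # any real JPEG would do

# KPLabs-02: a /work directory with a small text file.
os.makedirs(os.path.join(DOCROOT_02, "work"), exist_ok=True)
with open(os.path.join(DOCROOT_02, "work", "work.txt"), "w") as f:
    f.write("I like travel but no work\n")

print(sorted(os.listdir(DOCROOT_01)), sorted(os.listdir(DOCROOT_02)))
```

With nginx serving these docroots, requesting /images/galaxy.jpg on server 1 and /work/work.txt on server 2 returns the files, which is what the demo verifies next.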

So let's quickly verify whether we can actually access both contents from the servers. So, /images/galaxy.jpg: it appears that this is working. I'll copy the public IPv4 address of the second server, go to /work/work.txt, and I see "I like travel but no work." Perfect. So as far as the server side is concerned, things seem to be working perfectly. So now we can go ahead and implement the application load balancer. Go to Load Balancers, click on "Create a new load balancer," and this time select the application load balancer. Go ahead and click on "Create." I'll name this KPLabs-ALB. Now, the scheme can be either Internet-facing or internal; since we'll be browsing to it from outside, I'll use Internet-facing, with the HTTP protocol on port 80. For availability zones, I'll just select two, and I'll click on "Configure the security groups." So I'll just use the default security group, which allows port 80 for everyone. The routing must now be configured. So within this, just give a name.

I'll just give any name that you intend to use. You can see that you have a target type of either instance or IP over here, so you can have either of them. For the time being, I'll go with IP. Now, since I selected the IP target type, I have to put in the IP addresses of the two EC2 instances. So let me put in the IP address over here. Oops, it must be a private IP address; my mistake. So once you put in the private IP address, just click on "Add to list," and it will automatically get added to the list. Go to Review and click on "Create." So now we have an application load balancer configured. Now that we have added the IP address of this instance, whenever you go to the DNS name associated with the ALB, it will open up this specific IP address. So let's go to index.html. It takes a little time for the resolution to happen, so until then, let me check whether the resolution has completed.
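The console's complaint about the public address can be checked programmatically. A small sketch using Python's standard ipaddress module: IP targets registered with a target group must be private VPC addresses, which is why the public IPv4 address was rejected. The example addresses are hypothetical.

```python
import ipaddress

def is_valid_alb_ip_target(ip: str) -> bool:
    # IP targets for a target group must be private (VPC) addresses,
    # which is why the console rejected the public IPv4 address above.
    return ipaddress.ip_address(ip).is_private

# Hypothetical addresses: a public EC2 address is rejected, while the
# instance's private VPC address (default VPC 172.31.0.0/16 range) works.
print(is_valid_alb_ip_target("54.210.10.20"))  # False
print(is_valid_alb_ip_target("172.31.5.10"))   # True
```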

You see, it has not yet been resolved. Generally, it resolves quite quickly, but today and even yesterday, the name resolution has been taking quite a while. Anyway, until this ALB name gets resolved, let's look into how we can implement the images and work routing. So within the load balancer console, just click on Target Groups. This is the target group that was configured earlier. Let's create a new target group. This will be called "images," and for the target type you can, for example, go with IP address. I'll select IP address for the time being and click on "Create." So now you have a target group. Now, within this target group, you have to configure the targets. For the images, I have KPLabs-01, so I'll copy KPLabs-01's private IP address and add it as a new target within the target group. Perfect. And I'll click on "Register." So this target has been registered successfully. Similar to this, I'll create one more target group. The target group name would be "work," the target type would be IP address, and I'll click on "Create." This "work" target group should now be associated with the KPLabs-02 instance.

So I'll just copy the private IP, and I'll register this with the target group. Perfect. Great. So we now have two target groups. One target group is "images," and the EC2 instance associated with it is KPLabs-01. Similarly, we have another target group called "work," and this target group is associated with KPLabs-02. Now, what we have to do is associate these two target groups with the ALB rules. So in order to do that, go to the load balancer, select the ALB that you have created, and select Listeners. So there is one listener that is configured. Click on "View/edit rules." So there is already one rule in place.

So this is the default rule. So just click "Add," and here we'll insert a new rule. Here we'll select the path pattern; the path pattern would be /images*. What this basically means is that if /images is present within the URI, then forward the request to the target group "images," and I'll click on "Save." Perfect. Now, similar to this, I'll add one more rule where the path pattern is /work*: anything that comes with a URI of /work should be forwarded to the target group "work." So this is a nice little set of path-based routing rules that we have configured for our ALB. So now let's quickly verify whether the DNS name is resolving, and indeed it is. I'll copy the DNS name of this ALB, and now that I have the DNS name, let me quickly show you why this page has actually come up.

So, according to the listener's edit-rules screen, there are three rules. If the URI has /work, the request goes to the "work" target group; if the URI has /images, it goes to the "images" target group. However, if the URI has neither, then you will see the default page; that is the default rule that is added over here. So let's try /images/galaxy.jpg: you see, it seems to be working perfectly. Let us now try /work/work.txt, and again, this seems to be working perfectly. So this is what path-based routing for the application load balancer is all about. Simple, but extremely effective. So this is it. I hope this lecture has been informative for you, and I look forward to seeing you in the next lecture.
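The three-rule evaluation described above can be sketched as follows. The target group names are hypothetical, and fnmatch-style wildcards stand in for ALB path patterns such as /images*; rules are checked in priority order, with the default rule catching everything else.

```python
from fnmatch import fnmatch

# Rules in priority order; names are illustrative, not real AWS resources.
RULES = [
    ("/images*", "images"),
    ("/work*", "work"),
]
DEFAULT_TARGET_GROUP = "default"

def route(path: str) -> str:
    # Return the first target group whose path pattern matches the URI,
    # falling through to the default rule otherwise.
    for pattern, target_group in RULES:
        if fnmatch(path, pattern):
            return target_group
    return DEFAULT_TARGET_GROUP

print(route("/images/galaxy.jpg"))  # images
print(route("/work/work.txt"))      # work
print(route("/"))                   # default
```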

87. ALB – Listeners & Target Groups

Hey everyone, and welcome back to the KPLabs course. So in today's lecture, we will be discussing listeners and target groups. These two concepts are critical to understand when it comes to application load balancers and even network load balancers. So let's go ahead and understand this in more depth so that our concepts are much more clear. In simple terms, listeners are basically the processes in the load balancer that check for connection requests. Now, a listener works based on two aspects: one is the protocol, and the second is the port.

So before we go further, I'll just give you one example. If you go into a classic load balancer, you will see there are already listeners, and the listeners work based on protocol- and port-based connections. And the same goes for the application load balancer and the network load balancer as well. So I hope you already know what listeners are all about. One example is HTTP on port 80, or maybe HTTPS on port 443. So these are various listener configurations that we can have. Now, the new concept is that each listener is associated with a target group. This is not part of the classic load balancer but is part of the next generation of load balancers.

So you create a listener. There is a default listener that is added to the application load balancer, and that listener gets connected with the target group. Now, the target group in turn gets associated with the instance IDs. So what you do is create an application load balancer with a default listener. The default listener will not have any instances; it connects to the target group, and the target group in turn connects with the instances. So I'll give you a reference with the classic load balancer again. You see, within the classic load balancer, you have two important tabs: one is the Instances tab, where you can add or remove instances, and the second is the Health Check tab, where you can configure the health checks. These two tabs, in particular, are within the load balancer console itself. However, for the application load balancer, you can see those two tabs are not here. And the same goes for the network load balancer: those two tabs are not here.

So the question is, where are the instances and the health-check-related configuration done? These configurations are based on target groups. You configure both of them within a target group, and then you attach that target group to the listeners. So within the load balancer console, they added a new tab called Target Groups.

So this is the target group. Whenever you create a target group, you can configure the instances in the VPC using the instance IDs, and you also have the protocol- and port-related information that can be configured over here. So this is a logical diagram that shows you the basic flow of how things are done. Let's look at the overall architecture. The first step is to create the listener. Whenever you create an ALB, a default listener is automatically added to it; you can, however, add multiple listeners at a later time. So you create a listener, and you create a target group. Within the ALB, again, a default target group is already created. The target group in turn associates with a certain set of servers: you have target group one and target group two, each associated with its own servers. Now, we can refer to a server based on the instance ID or the IP address.

We have already seen that. Now, the listener in turn gets connected with the target group, and the elastic load balancer, the ALB, gets connected with the listeners. This is the logical flow of the diagram. The listener then acts based on conditions, and there can be multiple conditions here. We have already looked at path-based routing, where there were two conditions: if the URI contains /images, the request should go to target group 1, and if it contains /work, it should go to target group 2. And in turn, each target group has a different set of servers. So this is the basic logical diagram related to listeners, related to target groups, and also related to what conditions are all about.
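The listener, rule, target group, and target hierarchy described above can be modeled with a small data structure. All names, ports, and addresses here are illustrative, not real AWS identifiers.

```python
# A minimal model of the ALB object hierarchy: a load balancer holds
# listeners, a listener holds rules (condition -> target group), and a
# target group holds its registered targets.
alb = {
    "name": "kplabs-alb",
    "listeners": [{
        "protocol": "HTTP",
        "port": 80,
        "rules": [
            {"condition": {"path-pattern": "/images*"}, "target_group": "images"},
            {"condition": {"path-pattern": "/work*"},   "target_group": "work"},
            {"condition": "default",                    "target_group": "default"},
        ],
    }],
    "target_groups": {
        "images":  {"target_type": "ip", "targets": ["172.31.1.10"]},
        "work":    {"target_type": "ip", "targets": ["172.31.2.20"]},
        "default": {"target_type": "ip", "targets": []},
    },
}

# The flow: listener (protocol + port) -> rule condition -> target group -> targets.
for rule in alb["listeners"][0]["rules"]:
    tg = alb["target_groups"][rule["target_group"]]
    print(rule["condition"], "->", rule["target_group"], tg["targets"])
```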

88. ALB – Conditions & Host Based Routing

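As the heading suggests, ALB rules can also use a host-header condition in addition to path patterns, so multiple hostnames can share one load balancer and still reach different target groups. A minimal sketch, with hypothetical hostnames and target group names, again using fnmatch wildcards as a stand-in for ALB host conditions:

```python
from fnmatch import fnmatch

# Host-condition rules in priority order; hostnames and target group
# names are hypothetical.
HOST_RULES = [
    ("api.example.com", "api"),
    ("*.example.com",   "web"),
]

def route_by_host(host: str) -> str:
    # Return the first target group whose host pattern matches the
    # request's Host header, falling through to the default rule.
    for pattern, target_group in HOST_RULES:
        if fnmatch(host, pattern):
            return target_group
    return "default"

print(route_by_host("api.example.com"))  # api
print(route_by_host("www.example.com"))  # web
print(route_by_host("other.org"))        # default
```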

89. Understanding Network Load Balancer

Hey everyone, and welcome back to the KP Labs course. So in today's lecture, we'll be looking into network load balancers. Network load balancers are one of the new-generation load balancers introduced by AWS, so let us investigate what they are all about. Now, network load balancers basically work on the fourth layer of the OSI model. If you look at the OSI model, you have the physical layer, the data link layer, the network layer, and the transport layer; transport is the fourth layer. And if you look into the protocols supported in the fourth layer, they are TCP, UDP, and various others. Because the network load balancer, also known as NLB, sits at the fourth layer, it only supports the protocols found here; in fact, it does not support all of them, it supports the TCP protocol. However, one thing that you should also remember is that since it is working on the fourth layer, it will not be able to act based on the upper layers.

So it cannot look into the HTTP request, unlike what we saw with the application load balancer: an application load balancer can examine HTTP request headers, while a network load balancer cannot. Now, the network load balancer basically has a different algorithm. It works based on a flow hash algorithm, which uses the combination of source and destination IP addresses, ports, and the TCP sequence number. This might be a little confusing, so let me give you one example. When it comes to load balancing, there are various algorithms based on which the balancing happens, and classic load balancers generally work on a round-robin algorithm. So the first request went to the "KP Labs" page, and the second request went to the "My KP Labs" page. If I refresh, it goes to the KP Labs page again; if I refresh again, it goes to the My KP Labs page. So you see, one time it goes to the first server, and the next time it goes to the second server.

So this is the round-robin algorithm. A network load balancer, on the other hand, does not work based on round robin: each individual TCP connection is routed to a single target for the lifetime of the connection. We discussed that the flow hash algorithm works based on the IP addresses, the ports, and the TCP sequence number. So I'll give you one example. I'll open up Wireshark, open any random packet, and go to the Transmission Control Protocol section. This is the layer where the NLB operates. Within this, there is a port number and a sequence number, and in the Internet Protocol section, that's where the IP addresses are defined. So the network load balancer works based on the IP addresses, the destination port, and the TCP sequence number, and then it chooses the appropriate target.
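The flow-hash behavior can be illustrated with a toy sketch. AWS has not published the exact algorithm (the lecture calls it a black box), so this is purely an illustration of the idea: hashing the connection's flow tuple so that the same connection always maps to the same target, while a new connection is hashed afresh. Target names are hypothetical.

```python
import hashlib

TARGETS = ["kplabs-01", "kplabs-02"]

def pick_target(src_ip, src_port, dst_ip, dst_port, protocol="TCP"):
    # Hash the connection's flow tuple and map it onto a target. The real
    # algorithm also involves the TCP sequence number and is unpublished;
    # this only illustrates "same connection -> same target".
    flow = f"{protocol}:{src_ip}:{src_port}:{dst_ip}:{dst_port}"
    digest = hashlib.sha256(flow.encode()).digest()
    return TARGETS[digest[0] % len(TARGETS)]

# Every request on the same TCP connection lands on the same target...
first = pick_target("203.0.113.5", 50111, "172.31.1.10", 80)
assert pick_target("203.0.113.5", 50111, "172.31.1.10", 80) == first

# ...whereas a new connection (here, a new source port) is hashed afresh
# and may land on either target.
print(first, pick_target("203.0.113.5", 50112, "172.31.1.10", 80))
```

This is the contrast with round robin: round robin alternates targets per request, while a flow hash is deterministic per connection.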

So let me quickly give you an example. I have a "kplabs-network" load balancer; you see, the type is network. When I open the DNS name, it is routed to my internal KPLabs page. Now, there is a target group associated with the kplabs-network load balancer, and I'll show you that target group as well. Within the target group, there are two instances. Now, if I refresh, you'll see it went to the same server; I refresh again, and it went to the same server again. And this is because this is the same connection: it is a single TCP connection, and that is why it keeps going to the same target. So this does not really work based on round robin. Now, AWS has not really documented the exact steps of the flow hash algorithm; they have only said that this is how it works.

So it is a bit of a black-box scenario that they have given out. Now, there are a few advantages to a network load balancer, specifically compared to the application load balancer. First, it has the ability to handle a very volatile workload, and it can scale to millions of requests per second. This is one of the great advantages of the network load balancer: if you have a volatile workload that changes and you expect the load balancer to scale to millions of requests per second, a network load balancer is for you. Secondly, and this is very interesting, it supports the use of static IP addresses, so we can now make use of elastic IP addresses. Load balancing with a dynamic IP address was a very big pain, and the network load balancer now supports static IP addresses. So before we conclude this lecture, I'll show you that the overall concepts of the network load balancer and the application load balancer are very similar.

So, if you look, network load balancers have four tabs, and application load balancers have four tabs. A network load balancer has listeners, the application load balancer also has listeners, and both of them have target groups. So if you understand the application load balancer, you will understand the network load balancer as well. Now, the only major difference is that the application load balancer works based on the HTTP protocol, while the network load balancer works based on the transport protocol, the TCP protocol. This is the reason why, within the application load balancer, you can make routing decisions based on the HTTP protocol headers. This is not possible with network load balancers: since they cannot understand HTTP and cannot see HTTP headers, they cannot make routing decisions based on the HTTP protocol. You see, when we add a listener, the only things on which we can base it are TCP and the port number; that is it. We can't really make those fancy decisions based on host headers, path URIs, and so on. But the good thing is that it can support volatile workloads and have a static IP address.

90. Implementing Network Based Load Balancers

And welcome to the KPLabs course. So in today's lecture, we will be discussing the implementation aspect of the network load balancer. Now, one of the things that we discussed earlier is that the network load balancer supports static IP addresses, and you can associate an elastic IP address from your account with the network load balancer. Great, so this seems to be quite interesting. The first thing that we would ideally do before we configure the network load balancer is to make sure that we have elastic IP addresses in place. So in my case, I have a few elastic IP addresses that are not associated, and these are the free elastic IP addresses that I can use.

Perfect. So now the next thing that we'll do is create a load balancer of type network. I'll name this "network-load-balancer-demo," and the scheme would be Internet-facing. For the protocol, again, as we discussed, it works on the fourth layer, and it only supports the TCP protocol. If you go further down, you'll have to select the availability zones where the load balancer nodes will be created. I select the availability zone us-east-1a, and as soon as I select it, you see that within the elastic IP section you have the option to select which elastic IP address you want to associate. So I'll just select the 34-series address, and I'll go to Configure Routing. This is where you have to select the target group, so let's create a new one. I'll name it "network-load-balancer-demo" and select the target type. For the health checks, the protocol could be TCP, HTTP, or HTTPS, so you can select any one of them.

Okay, so I'll go with TCP. This is the place where you can enter the threshold values; I'll just leave them at the default. Now, within the targets, I'll select the two instances that are running, KPLabs-01 and KPLabs-02, add them to the registered targets, click on "Next: Review," and go ahead and create the load balancer. So our network load balancer is in the process of provisioning. It takes a little time for network load balancers to get configured, so let's just wait for the status to change from provisioning to available. Okay, it has been close to three to four minutes, and now our network load balancer has changed from provisioning to active. So now let's look into the DNS name and verify whether everything is working as expected. I'll paste it into my browser, and you'll see that things are working as expected. Perfect.

So, one thing that we discussed was related to static IPs. Let me just refresh the page: you see, we had assigned this specific elastic IP, and it is now associated with a specific ENI. If I open the ENI, this is the elastic network interface that is associated with the ELB network load balancer, and you can see the associated elastic IP address. Now let's do a quick nslookup and see which IP address this load balancer resolves to: it resolves to the same 34-series address, which is the static elastic IP address that we assigned. Now, you can also have multiple elastic IP addresses: you have to add multiple availability zones when you create the load balancer. In our demo session, we only added one availability zone, but if you add multiple availability zones, then there will be multiple elastic IPs for the same network load balancer. And that is it about the network load balancer. The configuration is quite simple, and if you understand application load balancer terminology like listeners and target groups, the same applies to the NLB as well. So this is it. I hope this has been informative for you, and I look forward to seeing you in the next lecture.
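The nslookup check from the demo can be scripted with the standard library. The NLB DNS name shown in the comment is hypothetical; the runnable example resolves localhost as a self-contained stand-in.

```python
import socket

def resolve_ipv4(hostname):
    # The same check as the nslookup in the demo: which IPv4 addresses
    # does this DNS name currently resolve to?
    return sorted({info[4][0] for info in
                   socket.getaddrinfo(hostname, None, socket.AF_INET)})

# Against a real NLB you would pass its DNS name, something like
# "network-load-balancer-demo-xxxx.elb.us-east-1.amazonaws.com"
# (hypothetical), and expect to see the elastic IPs you attached, one
# per availability zone. Here we resolve localhost as a stand-in.
print(resolve_ipv4("localhost"))
```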
