Amazon AWS Certified SysOps Administrator Associate Topic: EC2 High Availability and Scalability Part 2
December 20, 2022

8. [SAA/DVA] Network Load Balancer (NLB) – Hands On

Okay, so let's practise using our network load balancer. So I'm going to create a new network load balancer, and as you can see, it handles TCP, TLS, and UDP traffic. So let's create it, and we need to give it a name. In this case it's Demo NLB, the scheme is Internet-facing, the IP address type is IPv4, and then we have to choose the VPC network mapping. So for this, I'm going to select us-east-2a, us-east-2b, and us-east-2c. So something interesting is that when you select an AZ for your NLB, you get an IPv4 address, and remember, the NLB has fixed IPv4 addresses. So you can either use an IPv4 address assigned by AWS, or, if you want to provide your own Elastic IP address, you could do that. So, first, create your Elastic IP address and then assign it to your NLB.

So this is a particularity of the NLB: remember, it has fixed IPs, so we can assign one fixed IP per availability zone, which we will not do right now. Okay? Then, in terms of a listener, we're going to be listening on the TCP protocol on port 80. And this will work because HTTP relies on TCP underneath. So when we provide TCP port 80, we know things will work. But as you can see, there is no HTTP option here, because this is not an HTTP-based load balancer; this is a TCP-based load balancer. So when traffic arrives on TCP port 80, it will be forwarded, and we need to create a target group specifically for this. So let's create a new target group for the NLB. It's going to be an instance-based target group, and the target group name is going to be my-target-group-nlb-demo, on TCP port 80, and the VPC is the right one. We're just going to edit the health checks. So we're going to have a healthy threshold of three, and the interval is going to be 10 seconds so it goes a little bit quicker. Okay, next we have to register the targets. So let's register our three EC2 instances, include them as pending below, and create the target group.
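
For reference, roughly the same setup could be scripted with the AWS CLI. This is only a sketch: the load balancer name, subnet IDs, VPC ID, instance IDs, and ARNs below are placeholders, not values from this demo.

# Create the Network Load Balancer (subnet IDs are placeholders)
aws elbv2 create-load-balancer \
    --name Demo-NLB \
    --type network \
    --scheme internet-facing \
    --subnets subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333

# Create a TCP target group with a healthy threshold of 3 and a 10-second interval
aws elbv2 create-target-group \
    --name my-target-group-nlb-demo \
    --protocol TCP --port 80 \
    --vpc-id vpc-12345678 \
    --healthy-threshold-count 3 \
    --health-check-interval-seconds 10

# Register the three EC2 instances (instance IDs are placeholders)
aws elbv2 register-targets \
    --target-group-arn <target-group-arn> \
    --targets Id=i-aaaa1111 Id=i-bbbb2222 Id=i-cccc3333

# Add the TCP:80 listener that forwards to the target group
aws elbv2 create-listener \
    --load-balancer-arn <nlb-arn> \
    --protocol TCP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>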

So this target group is specific to my NLB, and we have to see whether the targets will be healthy or not. So for now, let's go back to the NLB. We'll refresh this and select the target group. So this one, and then we're done. So we can look at the summary and create our network load balancer. So let's view our load balancer, which is right here. So now we have an ALB and an NLB. So this is our NLB, and we need to wait for it to be provisioned. So I will pause the video right now. Okay, so my NLB is now provisioned, I can open the URL, press Enter, and, as you can see, things do not work right now. So I have a good idea why they do not work, and I will show you in one second. So if we look at our target groups, we have two: the first one is HTTP, and the second one is TCP. Well, the HTTP one had three targets, and they were all healthy, so that was great.

So we have three healthy targets. But if we look at the NLB one in detail and go over to targets, as you can see, the three targets are unhealthy. And that's because when you use a TCP-based target group with an NLB, the security group that is taken into account is the security group of the EC2 instances; there was no security group created when we created the NLB or when we created the target group on TCP. So that means that to solve this problem, we have to edit the security group of our instances.

So if we go back to the launch-wizard-1 security group, then for inbound rules, we can see that HTTP port 80 only allows my first load balancer's security group. But we need to create a new rule to allow HTTP from anywhere for the network load balancer to work. So I will just allow it from anywhere. This is necessary for the NLB, and the reason is that the NLB just forwards the traffic from the clients to the EC2 instance. And so, from an EC2 instance's perspective, the traffic doesn't look like it's coming from the NLB; it looks like it's coming from an external client. So once we've added this rule for the NLB, if we go back into our target group and open this one, we should see very soon that the instances become healthy.
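
As a rough sketch, the same rule could also be added with the CLI (the security group ID is a placeholder):

# Allow HTTP from anywhere on the instances' security group,
# because the NLB forwards traffic with the client's source IP preserved
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 \
    --cidr 0.0.0.0/0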

So let's just wait a minute. And so now my instances are becoming healthy. So, thanks to the security group change, all three of them are now healthy after the refresh. So that means that if I go back to my NLB and refresh, here we go, we get the Hello World from an instance, and then if I keep on refreshing, as you can see, I am redirected to the same instance. So that means that the connection is somewhat sticky, but at least we know that our NLB is working. So to close this chapter, we just need to go ahead and delete the NLB and the target group. So I will delete the NLB right here. I will delete this target group, which was a demo for the NLB. Yes please. And then for the security group, I'm going to go back into my security group and edit the rules to remove this one again, to go back to the state we were in when we had just the Application Load Balancer. So that's it for this hands-on. I hope you liked it, and I will see you in the next lecture.

9. [SAA/DVA] Elastic Load Balancer – Sticky Sessions

Let's talk about sticky sessions, also called session affinity, for your Elastic Load Balancer. So it's possible to implement what's called "stickiness" or "sticky sessions." And the idea is that a client making two requests to the load balancer will have the same back-end instance respond to each request. So the idea is that, for example, you have an ALB with two EC2 instances and three clients. If the first client makes a request and it is routed to the first EC2 instance, it means that when it makes a second request to the load balancer, it will be routed to the same instance, which differs from the normal behaviour in which the Application Load Balancer distributes requests across all EC2 instances. Now for client two, if it goes to the ALB and talks to the second instance, then all its requests will go there, and the same for client three.

Okay, so this is a behaviour that can be enabled for both the Classic and Application Load Balancers, and it works because there is a cookie sent as part of the requests from the clients to the load balancer, and it has an expiration date that controls the stickiness. That means that when the cookie expires, the client may be redirected to another EC2 instance. The use case for this is to make sure that the user stays connected to the same back-end instance in order not to lose session data, which can contain important information such as the user's login, for example. However, enabling stickiness may result in an imbalance in the load on the back-end EC2 instances if some instances have very sticky users. Okay, now to go a little bit deeper, how about the cookie itself? Well, there are two types of cookies that you can have for sticky sessions. The first one is an application-based cookie, and the second one is a duration-based cookie.

So for application-based cookies, it can be a custom cookie that is generated by the target, so by your application itself, and you can include any custom attributes required by your application. The cookie name must be specified individually for each target group, and you must not use the names AWSALB, AWSALBAPP, or AWSALBTG, as they are reserved for use by the ELB. It could also be an application cookie that is generated by the load balancer itself, and the cookie name used by the ALB will be AWSALBAPP. Okay? Now the second type of cookie is a duration-based cookie, which is generated by the load balancer, and its name is AWSALB for the ALB and AWSELB for the CLB. Okay? And the idea is that this one will have an expiry based on a specific duration, and the duration is set by the load balancer itself, okay? Whereas with an application-based cookie, the duration can be specified by the application itself. So that's how it works. Okay, you don't need to remember the exact names of the cookies, but remember that there are application-based cookies and duration-based cookies, that they have specific names, and that this will come into play when we discuss CloudFront.

Okay? So if I look at my load balancer right now and open it in a new tab, as you can see, it alternates between my three instances behind my load balancer. So that's perfect. But now I'm going to enable sticky sessions. So to do so, I'm going to go to the target group level, open my target group, and then click Actions so I can edit the attributes of my target group. And at the bottom, I have stickiness, or sticky sessions. And as you can see, we have two types of stickiness available to us. Okay, we have the load balancer-generated cookie, which is a duration type of stickiness, where I can set anything between 1 second and seven days, or I can have an application-based cookie, again between 1 second and seven days. But this time I need to specify the cookie name that is sent by the app to the load balancer. So it could be my custom app cookie, and then this is what the load balancer would use to perform the stickiness. Okay, so that's it for stickiness. For this demo, I'll just use a load balancer-generated cookie, set the stickiness duration to, say, one day, and save this change.
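
If you prefer the CLI, the same target group attributes can be set there too. A minimal sketch, assuming a placeholder target group ARN and a hypothetical custom cookie name:

# Duration-based stickiness (load-balancer-generated AWSALB cookie), one day
aws elbv2 modify-target-group-attributes \
    --target-group-arn <target-group-arn> \
    --attributes Key=stickiness.enabled,Value=true \
                 Key=stickiness.type,Value=lb_cookie \
                 Key=stickiness.lb_cookie.duration_seconds,Value=86400

# Application-based stickiness using a custom cookie sent by the app
aws elbv2 modify-target-group-attributes \
    --target-group-arn <target-group-arn> \
    --attributes Key=stickiness.enabled,Value=true \
                 Key=stickiness.type,Value=app_cookie \
                 Key=stickiness.app_cookie.cookie_name,Value=MyCustomAppCookie \
                 Key=stickiness.app_cookie.duration_seconds,Value=86400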

So now let's have a look. So I'm going to go and open the debugger as well, so we can have a look at the Network tab and see what happens. So, if you look at the Network tab and I refresh this page multiple times, as you can see, we get access to the same instance. So 7176 is the one that keeps coming back, again and again. And now, if you look at the GET request made to the load balancer—I'm very sorry for the font size here; I don't think I can really increase it—and go to cookies, as you can see here, there is a response cookie that says the cookie expires tomorrow; here is the path, and here is the value of the cookie. And then, in the request cookies, when the browser makes a request to the load balancer, it sends back the cookie it has right here. And so, because of the cookie being passed back and forth, this is how stickiness works. Okay, so that was just a little bit of a deeper dive into how stickiness works. And by the way, to access the Web Developer Tools, you click on Web Developer and then Web Developer Tools, or just use the keyboard shortcut for that; it's similar on Chrome and Firefox. And then you go into the Network tab and get access to the information surrounding your requests. Okay? Finally, simply return to your target group, edit the attributes, and disable stickiness to return to normal behaviour. We should be good to go. So that's it for this lecture. I hope you liked it, and I will see you in the next lecture.

10. S3 – Cross-Region Replication

Okay, so I'm here in the AWS console. If we just go down to Services, we can go ahead and click on S3. And in here, we've got my bucket, acloudguru2017-ryan, and it's got three different files, or objects, in it. And versioning is turned on, which we did in the last lab, so you can see the different versions here. So, what we want to do now is create a new bucket. I'm going to call this my Sydney bucket. It's telling me there are no uppercase characters allowed—one second. So something like acloudguru-sydney; please tell me that somebody hasn't taken that name. And there we go. So I'm going to put this in Sydney. So this one's in London, this one's in Sydney—literally opposite sides of the world. I'm going to leave everything as default, to be honest, so I'm going to just go ahead and hit Create, and we're going to create this bucket inside Sydney. So far, I haven't enabled versioning and haven't done anything; I've literally left it as a default bucket, which now exists within the Sydney region.

So let's go ahead and turn on cross-region replication. We should get an error message because, in order for cross-region replication to work, you need versioning turned on in both buckets. So let's go ahead and come in here, go over to our bucket—and it's not actually under Properties, sorry, it's under Management; it used to be under Properties, but they've moved it—and you just click on Replication. So it says you haven't created any cross-region replication rules for this bucket. Let's go ahead and add a rule. So what do we want to do? Do we want the bucket's entire contents or just a prefix? And by prefix, they just mean a folder. So you can have subfolders of buckets replicated across; you don't have to replicate the entire bucket. I'm going to do the entire bucket, and I'll leave it enabled. Then the destination: we go in here and select our destination. It can be a bucket in this AWS account or a bucket in another AWS account, so do bear that in mind. I'm going to go ahead and use my Sydney bucket. And then it says this bucket doesn't have versioning enabled; cross-region replication requires bucket versioning, so enable versioning on this bucket. So there's our error message. We're going to go ahead and enable it. And there we go. We can also optionally change the storage class. So we might want to change this to Standard – Infrequent Access. We would do this especially if we were only using this as a backup.

So I am going to do that: I'm going to make it Standard – Infrequent Access. Go ahead and hit Next. It's going to ask us to select an IAM role for this, and we don't have one, so we're just going to say, "Create a new role for me." Go ahead and hit Next. And there we go. It's now going to create a role, and it's already enabled versioning on our Sydney bucket. We'll go ahead and hit Save, and we have just enabled replication. So here we go: replicate the contents of this bucket from London to Sydney, to my Sydney bucket. And this has created a new role, an S3 CRR (cross-region replication) role for our bucket. So that's fantastic. Now I've got a question for you. Do you think the objects that are sitting inside our current bucket—so these three objects—have been replicated over to Sydney? Well, let's go over and have a look. So click in here, go over to our Sydney bucket, and you can see that they're not in there yet. So it's only new objects, or any object that we change, that will be replicated over, not the existing objects. So you might be wondering, "How can you copy the contents of one bucket into another bucket?" The easiest way to do this is via the command line. Okay? So the first thing you're going to need to do is go to Google and just type in "AWS command line tools" or something like that, and it should be the very first link that's not an ad. Click on that. Now you are going to have to go ahead and install this. If you're a Windows user, you can run the 64-bit or 32-bit Windows installer; most of you will be using 64-bit. Mac and Linux users can install it using pip, so you just type "pip install awscli". Also, if you're a Mac user and you don't have pip installed, just go back to Google and look up the installation guide; I believe there is an installer. So here's the user guide for it; just click in here. This will tell you how to do it using Python, but there are standalone installers, and that is actually way easier than using the bundled installer. Same for Windows users: using the MSI installer is a lot easier.
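
For reference, the same replication setup can also be scripted with the CLI. This is a rough sketch of the older (V1) replication schema; the bucket names, account ID, and role name are placeholders:

# Versioning must be enabled on both buckets before replication can be configured
aws s3api put-bucket-versioning --bucket my-source-bucket \
    --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket my-sydney-bucket \
    --versioning-configuration Status=Enabled

# Write the replication rule: whole bucket, destination in Sydney, Standard-IA
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {
        "Bucket": "arn:aws:s3:::my-sydney-bucket",
        "StorageClass": "STANDARD_IA"
      }
    }
  ]
}
EOF

aws s3api put-bucket-replication --bucket my-source-bucket \
    --replication-configuration file://replication.json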

After you've done that, switch to a command prompt or a terminal. So right now I'm in my terminal window on a Mac; you'll either have this or the Command Prompt or PowerShell, depending on what you want to use. So, in terms of being able to do this, if you just type "AWS CLI tools" into Google, you'll be able to download and install the command-line tools. And then to set it all up, you just have to type "aws configure", and you have to pass in an access key ID and a secret access key, and you get those by creating a user in Identity and Access Management. Okay, so here I am in the IAM console. You want to go in there, and you probably just want to go ahead and create a new group. I've got a group called Admin Access; let's just create something similar. So this is where your administrators would go. I'm just going to call it aws-admin-access or something like that. Then, in this section, you only want the policy type Administrator Access, and if you can't see that, just type "administrator" into this box. Go ahead and select that policy and hit Next. That will then create the group, which will give us admin access. So there we go: aws-admin-access. The next thing we need to do is go ahead and create a user. So my user is called Ryan's iMac, which you can see here; you can call yours whatever you like. And I actually only want programmatic access; I don't want console access for this user. Let's go ahead and configure their permissions. I'm going to chuck them into the group that we just created and scroll down.

Go ahead and hit Next, and go ahead and create the user. Now, as soon as you create the user, you're going to get the access key ID and secret access key. Don't worry, for the folks watching at home, I'm going to delete this user as soon as this video is finished. So all we have to do now is return to the terminal. So here I am in my terminal window. I'm just going to type "aws configure" again, and it's asking me for my access key ID, so I just copy and paste the access key ID in there. It then asks me for my secret access key, so I copy and paste my secret access key in there, and then it's going to ask me for my default region name. So I'm in London, so it's eu-west-1—sorry, eu-west-2. Just choose whatever the default region is for you, and then go ahead and hit Enter, and then just hit Enter for your default output format. So I'm just going to clear the screen, and if this has all worked, you should just be able to type "aws s3 ls", and that will show us our buckets. Now I'll tell you how I use this in real life. I've recently been starting to get into Bitcoin and, in particular, Ether and the Ethereum blockchain. So I've been buying Ether from different providers, and I store it in my wallet.
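
So, as a quick recap of that terminal session (the region shown here is just the example from the demo; use your own default region):

# One-time CLI setup with the access key pair from the IAM user
aws configure
# AWS Access Key ID [None]:     <paste access key ID>
# AWS Secret Access Key [None]: <paste secret access key>
# Default region name [None]:   eu-west-2
# Default output format [None]: <press Enter>

# List the buckets to confirm the credentials work
aws s3 ls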

My wallet is then stored as an encrypted file, which I create using client-side encryption on my Mac. I then store this up in S3, and then I use cross-region replication to replicate it to another region. And I obviously encrypt both buckets at rest as well. So for me, that's really good: I know that no matter what happens, I've got a copy of my wallet somewhere on S3. And of course, I lock down those buckets very, very well to stop people from being able to access my wallet and steal my coins. And you can see my two buckets here: we've got acloudguru2017-ryan and then my Sydney bucket. And so all I need to do is type "aws s3 cp", and then we'll do this recursively.

So it's going to copy everything recursively, and then the paths start with s3:// like this. So we want the source bucket, acloudguru2017-ryan, and then we copy it to s3:// and my Sydney bucket. I think I put a typo in there; let me just fix that, and go ahead and hit Enter. And this will copy the contents of my source bucket to my destination bucket. So there we go: it's copying it across the world, and now my bucket in Sydney will be an exact copy of my bucket in London. So I'm back on the console. I'm just going to hit refresh now that the copying has taken place, and we should hopefully see our objects. So there we go: there are our three objects. So the last thing I want to show you is what happens if we make an object public inside this bucket. So let's go here. I can't remember if this is already public or not; let's take a look. If we just click on it—yes, it is. So this is already public. Now, let's go over to my Sydney bucket and take a look; we'll see if the permissions have copied across. If I click on this—no, the permissions have not copied across, because I've just copied the objects themselves, and it's by default a private bucket. Let's go and make our other file in London private, and then we'll make it public again and see if cross-region replication copies over the permissions. I'll return to S3, go in here, and make sure that this is private. So let's go a little further.
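
Put together, the copy command looks roughly like this (the bucket names are placeholders standing in for the London source bucket and the Sydney destination bucket):

# Recursively copy every existing object from the source bucket to the destination bucket
aws s3 cp s3://acloudguru2017-ryan s3://my-acloudguru-sydney-bucket --recursive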

Actually, it'll be easier to go down here. So we'll go into Permissions here; we've got public access, we'll set Everyone to no, and hit Save. And now let's update it again and grant public access to this object. So go ahead and hit Save. Let's go back over to my Sydney bucket and click here; we'll click on the link. Okay, so there are two more things I want to show you—or really three. But let's go in here and delete our public TXT file first of all. So go ahead and hit Delete, and go ahead and hit Delete again. Remember that we have versioning enabled, so pressing Delete simply places a delete marker over it; the file is still there. Do you think the object in our Sydney bucket will also be deleted, or do you think it will still be visible? Let's go ahead and have a look. So go ahead and click in here to see if it's been deleted. And if we click Show versions, we can see that the delete marker has been replicated to our other region. That's pretty sweet, right? Now let's go back into our source bucket and click here. If we delete this delete marker, intuitively, you would think that our destination bucket would automatically have its delete marker removed as well. Let's go ahead and have a look. So if we click on my Sydney bucket—oh no, it hasn't. Look, it's still there.

So when you delete an object, that delete marker is replicated across. However, if the delete marker is removed, that removal is not replicated across. I'm not sure why this is—I can't see what the advantage is—but it is just some interesting behaviour to note. So if we want to restore this bucket to being an exact copy, we're going to have to go ahead and delete the delete marker in the destination as well. There we go; so now it's an exact copy again. The last thing we're going to do is just go back over here, and we'll go in here. I'm just going to do an update to my public TXT file. So it said, "Hello, Cloud Gurus, this is public," and then I'm going to write, "This is the updated version," or something like that. Go ahead and save. Now we're just going to go in and upload the new file here. So go ahead and add my file—it was the public text file—go ahead and hit Next, and go ahead and just hit Upload. Now, notice that uploading a new version has stopped it from being public. So all we have to do is go in and click on it to make it public again. And so what we can do now is click in here, and you can see this is the updated version. We just go back to our Sydney bucket and make sure that it is also public. Just click in here, and it should be public.

There we go. Because it was a new object, the permissions have been replicated from the source to the destination bucket. I'm just going to go back now. Let's see what happens if we delete this version: will that then delete the version from our destination bucket? So in here, we can see that we've got two versions: our latest version and the previous version. So if I go ahead and delete the latest one, what do you think will happen? Do you think it will also be removed from the destination bucket, or do you think the destination will keep it as is? Let's go ahead and hit Delete. And so now we're back to the previous version. So if I click in here and actually view it, it's going to just say, "Hello, Cloud Gurus, this is public." So we're back to version one. If we go back to S3, go back to our Sydney bucket, click in here, and go to versions—look, it still didn't delete that version. So, again, I'm not actually sure why they do this, but that's just the behaviour of cross-region replication. So version control can be a bit of an issue: if you do revert to a previous version in your source bucket, you must also go ahead and make that change in your destination bucket as well. So let's move on to my exam tips.

11. [SAA/DVA] Elastic Load Balancer – Cross Zone Load Balancing

Now let's talk about cross-zone load balancing. So let's take an example of a very unbalanced situation that will illustrate the point. We have two availability zones: the first one has a load balancer instance with two EC2 instances, and the second one has a load balancer instance with eight EC2 instances, and both load balancer instances are part of the same, more general, load balancer. The client is accessing these load balancer instances. And so, with cross-zone load balancing, each load balancer instance will distribute the load evenly across all registered instances in all availability zones. So the client itself is sending 50% of the traffic to the first ALB instance and 50% of the traffic to the other ALB instance, but each ALB instance is going to redirect that traffic across all ten EC2 instances, regardless of which availability zone they're in. This is why it's called "cross-zone load balancing." So if we take a look at the first ALB instance, it's going to send 10% of the traffic it receives to each of these instances, because we have ten instances in total, so each of them gets 10% of the traffic.

Similarly, the second ALB instance will send 10% of its traffic to each of these instances. So in this example, with cross-zone load balancing, we are distributing the traffic evenly across all EC2 instances, and that may or may not be something you want. But at least this behaviour is available to you. The second behaviour that's available to you is to not have cross-zone load balancing. So we take the same example, but without cross-zone load balancing, the requests are distributed only to the instances registered with the node of the Elastic Load Balancer that received them. So in this example, the client sends 50% of the traffic to the first AZ and 50% of the traffic to the second AZ, okay? But the first ALB instance is going to send the traffic only to the EC2 instances in its own availability zone, so that means that each EC2 instance in this availability zone is going to get 25% of the traffic overall. Okay? So it's a 50/50 split of that AZ's traffic.

And the ALB instance on the right will again divide the traffic it receives between the instances that are registered within its AZ, so AZ two. So we can see here that traffic is contained within each AZ without cross-zone load balancing. But if there is an imbalanced number of EC2 instances in each AZ, then some EC2 instances in a specific AZ will receive more traffic than others. It's just an option to know about; there's no right or wrong answer, it depends on the use case, obviously. So for the ALB, cross-zone load balancing is always on, and you cannot disable it. And usually, when data goes from one AZ to another, you have to pay for it, but because it's always on and you cannot disable it, you don't pay for inter-AZ data transfers. For the Network Load Balancer, it is disabled by default, and if you want to enable cross-zone load balancing, you must pay for data transfers between availability zones. And finally, the Classic Load Balancer: if you create it through the console, it's enabled by default; if you do it through the CLI or an API, it's disabled by default; and if you do enable it, you're not paying for data going across availability zones. Okay? So all load balancers have this capability: for the ALB it's always on, for the NLB you need to enable it and you're going to pay for it, and for the CLB you can enable it and you're not going to pay for it. Okay? So just for the sake of this hands-on, I've created a CLB and an NLB, but you don't have to do it; I'm just going to show you the settings.

So for the Classic Load Balancer right here, if I scroll all the way down, we have cross-zone load balancing, which is disabled by default, but I can change that setting and enable it, and this will evenly distribute the traffic across all the EC2 instances that are registered with this Classic Load Balancer. So here we go, and I'm not going to pay for it. For the ALB, it's always enabled by default, so there is no setting to change the cross-zone load balancing because it's always on for the Application Load Balancer. And for the NLB, I scroll down and I see that cross-zone load balancing is disabled by default, and I will say, "Okay, I want to enable it." But here it says, by the way, you will pay regional data transfer charges because cross-zone load balancing is enabled—for the NLB, this is something they will charge you for. Okay, you click on Save, and cross-zone load balancing will be enabled, but you will have to pay for it. That's all I wanted to show you, so you can just skip this hands-on, and I'll see you in the next lecture.
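
The same settings can also be changed from the CLI. A short sketch, with placeholder load balancer names and ARNs:

# NLB: cross-zone load balancing is off by default; enabling it incurs inter-AZ data charges
aws elbv2 modify-load-balancer-attributes \
    --load-balancer-arn <nlb-arn> \
    --attributes Key=load_balancing.cross_zone.enabled,Value=true

# CLB: also off by default when created via CLI/API, but free to enable
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-classic-lb \
    --load-balancer-attributes '{"CrossZoneLoadBalancing":{"Enabled":true}}'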

12. [SAA/DVA] Elastic Load Balancer – SSL Certificates

Okay, so now let's talk about SSL and TLS certificates. This is a simplified version of how it all works—it's obviously way more complicated—but I want to introduce you to the concepts in case you don't know them. And even if you do know SSL and TLS, please watch this lecture, because I'm going to talk about SNI and the integrations with load balancers. So bear with me, please. So an SSL certificate allows the traffic between your clients and your load balancer to be encrypted while in transit. This is called in-flight encryption.

So that means the data, as it goes through the network, is encrypted and can only be decrypted by the sender and the receiver. SSL stands for Secure Sockets Layer, and it's used to encrypt connections. And TLS is the newer version of SSL, and it refers to Transport Layer Security. But the thing is, nowadays TLS certificates are the ones that are mainly used, but people, including myself, will still refer to this as SSL. So I'm making a mistake, but I'm making it on purpose. Okay, so it's better to say TLS certificates than SSL certificates, but for many reasons, I'm still going to say SSL because it's easier to understand. So public SSL certificates are issued by certificate authorities, which include Comodo, Symantec, GoDaddy, GlobalSign, DigiCert, Let's Encrypt, and so on. And using a public SSL certificate attached to our load balancer, we're able to encrypt the connection between the clients and the load balancer. So, whenever you visit a website, such as Google.com or any other, and see a green lock, it means that your traffic is encrypted. And if traffic is not encrypted, you'll see a warning saying, "Hey, traffic is not encrypted; it is not secure to enter your credit card information or login information." SSL certificates have an expiration date, and they must be renewed regularly to make sure that they're authentic. Okay, so how does it work from a load balancer perspective?

So users connect over HTTPS, and because it's using SSL certificates, it's encrypted and secure, and it connects over the public Internet to your load balancer. Your load balancer performs SSL certificate termination internally, and on the back end, it can talk to your EC2 instances using plain HTTP. So that traffic is not encrypted, but it goes over your VPC, which is your private network, and that is somewhat secure. So the load balancer will load an X.509 certificate, which is called an SSL or TLS server certificate. You can manage these certificates in AWS using ACM, the AWS Certificate Manager. We won't be looking at ACM in this lecture, just getting a sense of what it is. Now, you can also upload your own certificates to ACM if you want to. When you configure an HTTPS listener, you must specify a default certificate. Then you can add an optional list of certificates to support multiple domains, and clients can use something called SNI, or Server Name Indication, to specify the host name they want to reach. Now, don't worry; I'm going to explain what SNI is in detail on the next slide, because it is really, really important for you to understand what it means. Finally, for HTTPS, you can set a specific security policy if you want to support older versions of SSL and TLS, for so-called legacy clients. Okay, so let's talk about SNI, because it is so important. SNI solves a very important problem: how do you load multiple SSL certificates onto one web server in order for that web server to serve multiple websites? It's a newer protocol that requires the client to indicate the host name of the target server in the initial SSL handshake. So the client will say, "I want to connect to this website," and the server will know which certificate to load. Because this is a newer protocol, not every client supports it, and it only works with the Application Load Balancer and the Network Load Balancer—the newer generations—and with CloudFront, which we'll cover later in this course. It does not work with the Classic Load Balancer, because it is an older generation. So anytime you see multiple SSL certificates on your load balancer, think ALB or NLB.
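
As a rough sketch, this is what adding an HTTPS listener with a default certificate could look like from the CLI; the ARNs are placeholders, and the security policy is just an example value, not something configured in this demo:

# Add an HTTPS listener that terminates TLS with a default ACM certificate
aws elbv2 create-listener \
    --load-balancer-arn <alb-arn> \
    --protocol HTTPS --port 443 \
    --ssl-policy ELBSecurityPolicy-2016-08 \
    --certificates CertificateArn=<acm-certificate-arn> \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>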

So, as a diagram, what does it look like? We have our ALB here, and we have two target groups. The first one is for www.mycorp.com, and the second one is for domain1.example.com, for example. So the ALB will be routing to these target groups based on some rules, and the rules in this case may be linked directly to the host name. So the ALB will have two SSL certificates—one for domain1.example.com and one for www.mycorp.com—corresponding to the target groups. Now, the client connects to our ALB and says, "I would like www.mycorp.com," and that is part of the Server Name Indication. And the ALB says, "Okay, I've seen that you want mycorp.com; let me use the correct SSL certificate to fulfil that request." So it's going to take the right SSL certificate, encrypt the traffic, and then, thanks to the rules, it's going to know to redirect to the correct target group for mycorp.com. And obviously, if you have another client connecting to your ALB, for example for domain1.example.com, then it will be able to pull the right SSL certificate again and connect to the right target group. So using SNI, or Server Name Indication, you are able to have multiple target groups for different websites using different SSL certificates. So finally, what is supported for SSL certificates? Well, the Classic Load Balancer can only support one SSL certificate.

And if you want multiple host names with multiple SSL certificates, the best way is to use multiple Classic Load Balancers. For the ALB (v2), you can support multiple listeners with multiple SSL certificates, and that's a great part of it; it uses SNI to make it work, and we just saw what that is. And the NLB, or Network Load Balancer, supports, again, multiple listeners with multiple SSL certificates, and it will use SNI again to make it work. Okay, so let's look at the Classic Load Balancer. If we go to listeners here, I'm able to edit and add an HTTPS listener, and I need to set up a cipher, which is the security policy with the protocols we want to support. And then I need to set up an SSL certificate, and I can import it directly and paste it in manually, or I can choose a certificate from ACM, which is AWS Certificate Manager, but we don't have one, so we can't use it. But I want to show you that, yes, you can set up an HTTPS listener here, and it supports only one SSL certificate.

So that is the Classic Load Balancer. Next, for the Application Load Balancer, we can add another listener as well. So we'll say, "Okay, this listener is HTTPS, and the default action is to forward to our target group." Excellent. Now we set a security policy, this one, and then we say, "What is the default SSL certificate?" So is it from ACM, from IAM, or imported? Okay, and we can choose the certificate we want. But the idea is that now, for each rule, we can have a different SSL certificate, and that would allow us, using Server Name Indication (SNI), to have multiple SSL certificates for different target groups. So, really, really good. But again, I'm not going to do it, because we don't have the right certificates in place. And for the NLB listener, again, you can add a listener, and this one can be TLS for secure TCP, and again, the default SSL certificate can come from ACM, from IAM, or be entered manually. All right, so that's it for this lecture, just to show you how the settings work. What you need to remember is that the CLB is the old way of doing things and does not support SNI, while the ALB and NLB do support SNI and multiple SSL certificates. All right, that's it. I will see you in the next lecture.
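
And to illustrate the SNI part: additional certificates can be attached to an existing HTTPS listener, and the ALB or NLB then picks the right one per connection based on the host name the client indicates. A sketch with placeholder ARNs:

# Attach an extra certificate to the HTTPS listener; SNI selects it when
# a client asks for the matching host name (e.g. www.mycorp.com)
aws elbv2 add-listener-certificates \
    --listener-arn <https-listener-arn> \
    --certificates CertificateArn=<additional-certificate-arn>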

13. Elastic Load Balancer – Health Checks

So let's talk about a feature that can come up in the exam on your load balancers, and that is connection draining. It has two different names depending on which load balancer you're considering. If you're using a CLB, or Classic Load Balancer, then it's called connection draining. But if you have a target group, it's called a deregistration delay, and that is obviously applicable to Application Load Balancers and Network Load Balancers.

So what is this connection draining? I'll just call it connection draining. It is the time given to complete in-flight requests while the instance is de-registering or unhealthy. And the idea is that it will allow the instance to finish whatever it was doing before being deregistered. And so the idea is that the ELB, as soon as an instance is in draining mode, will stop sending new requests to that instance, because it is being deregistered. So let's have a look at the diagram to understand it better. We have our ELB, and we have three EC2 instances behind it, and our users are accessing, for example, our first EC2 instance through the ELB. It turns out that our EC2 instance may be being terminated or is unhealthy, so it's going to go into draining mode. As a result, existing connections will be kept open during the connection draining mode until the connection draining period is completed, and by default, that is 300 seconds. Any new connections made by users to the ELB will be redirected to the other EC2 instances that are registered with your ELB—so the second or third EC2 instance in this example. And so your deregistration delay is going to be 300 seconds by default, but you can set it anywhere between 1 second and 3600 seconds, which is 1 hour.

And you can also disable it by setting the value to zero. So let's just discuss the value a little bit. If your requests are quite short—if you have a web application that serves very short requests, maybe one to five seconds—then you want to set your deregistration delay to something quite low, maybe 10 to 20 seconds, because you don't expect any request to last longer than 20 seconds. However, if your EC2 instances are very slow to respond, perhaps taking minutes because they have a lot of data processing to do, you may want to increase your connection draining to allow these requests that are already in flight to be completed. And obviously, if you don't want that behaviour at all, then you can just disable it by setting the value to zero, and in the event that a connection is dropped while your EC2 instance is being killed, your users will receive an error, and maybe it's then up to your users to retry that request until it succeeds by being redirected to a new EC2 instance. Okay, so in the exam, the connection draining concept can come up, and you just need to understand it at a high level. So I hope that was helpful, and I will see you in the next lecture.
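
To tie this back to the settings, here is a rough sketch of how the deregistration delay (and the CLB's connection draining) could be adjusted from the CLI; the target group ARN and load balancer name are placeholders:

# ALB/NLB target group: shorten the deregistration delay to 30 seconds
aws elbv2 modify-target-group-attributes \
    --target-group-arn <target-group-arn> \
    --attributes Key=deregistration_delay.timeout_seconds,Value=30

# Classic Load Balancer: the equivalent setting is the ConnectionDraining attribute
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-classic-lb \
    --load-balancer-attributes '{"ConnectionDraining":{"Enabled":true,"Timeout":30}}'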
