Amazon AWS Certified Solutions Architect Professional SAP-C02 Topic: Design for New Solutions Part 2
December 16, 2022

46. Implementing Failover Routing Policy

Hey everyone, and welcome back to the Knowledge Pool video series. In the earlier lecture we looked at how failover-based routing works conceptually. So in today's lecture, we'll be doing the actual practical and seeing how it works in practice. I am at my Route 53 console, so let's do one thing: let's create a record set. I'll say the name would be "demofailover". So this is the subdomain, and this is my main domain.

Now within the value, I'll put the IP address of my current server, which is where my NGINX is running perfectly. Within the routing policy, you will see that there are various types of routing policies; here we'll select the failover routing policy, and this will be the primary record set. That means that whenever someone visits this specific URL, the primary answer will be the value that is present over here. For the TTL, just give 30 seconds; a low TTL is optimal when you are using failover. Now you have to associate it with a health check, because Route 53 needs to know when to use the primary or the secondary. So we'll associate it with the health check we've created, and I'll press the Create button. Perfect. So now we have the demofailover subdomain created. Next, we'll create another endpoint, which is where traffic will go when the failure actually occurs.
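The console steps above can also be sketched programmatically. Below is a minimal, hypothetical sketch of the primary failover record as a boto3-style ChangeBatch; the domain, IP, and health check ID are placeholders, and the actual API call (shown only in a comment) is omitted so the snippet stays self-contained:

```python
def failover_primary_record(name, ip, health_check_id, ttl=30):
    """Build a ChangeBatch for a PRIMARY failover A record.

    A real run would pass this to:
      boto3.client("route53").change_resource_record_sets(
          HostedZoneId=zone_id, ChangeBatch=batch)
    """
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",       # served while the health check passes
                "TTL": ttl,                  # keep low so clients re-resolve quickly
                "ResourceRecords": [{"Value": ip}],
                "HealthCheckId": health_check_id,
            },
        }]
    }

batch = failover_primary_record("demofailover.example.com",
                                "192.0.2.10", "hc-placeholder-id")
```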

Depending on your preferences, you can either create another server or an S3 endpoint. For our demonstration, we'll be using the S3 endpoint. So I'll click on "Create Bucket" and go with "demofailover", the same name we've given over here. I'll click on Next, then Next again. Since this is going to be a website, I'll give everyone public read access to this bucket. Go to Next and click on "Create Bucket". Perfect. Now that the bucket is created, I'll upload my index.html file over here. I'll give it read access so that everyone can view it, and then click on Upload. Once this file has been uploaded, go to Properties > Static Website Hosting and use this bucket to host your website. I'll put index.html over here and click on Save. I hope you already know how to do all these things by now. Once this is created, just verify that everything is working fine. Perfect, it seems to be working fine. Now let's go back to Route 53, and this time we'll create an alias record. In order to do that, let me just copy the alias target. I'll click on "Create Record Set" and give the record set the same name as earlier, which is demofailover.

The alias will be "Yes", and this is where we'll be putting the alias target name. One thing to keep in mind is that the alias target name will be the s3-website endpoint. The one shown here is for us-east-1, so you need to replace it with the region where you had configured your S3 bucket. This is important to remember; since I have configured mine in North Virginia, this is the domain that I'll be using. So, going back, let's create a record set named demofailover as an alias record, and I'll use the S3 endpoint. Within the routing policies, it would be failover, and this failover record type would be secondary. You can simply leave "Evaluate target health" as No, because this is S3, which is already designed to be highly durable. And I'll click on Create. Okay, it says I have to remove the dot at the end. Click on Create. Perfect. So now you will see that I have two record sets. This is the primary one; you see that the failover record type is primary, and the health check associated with this record is the KPLabs video course health check that we had created earlier. If this health check fails, then Route 53 will switch to the secondary. This is the secondary record, and it will go to the alias target associated with the secondary record set. So let's try this out. Let me just copy the domain and see if everything works properly.
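The secondary record can be sketched in the same style: an alias pointing at an S3 website endpoint. The DNS name and hosted-zone ID below are illustrative placeholders; each region's S3 website endpoint has its own fixed hosted-zone ID, which you must look up for your bucket's region.

```python
def failover_secondary_s3_alias(name, s3_website_dns, s3_zone_id):
    """Build a ChangeBatch for a SECONDARY failover alias record to S3."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": s3_zone_id,     # region-specific, look it up
                    "DNSName": s3_website_dns,      # must match the bucket's region
                    "EvaluateTargetHealth": False,  # S3 is highly durable
                },
            },
        }]
    }

batch = failover_secondary_s3_alias("demofailover.example.com",
                                    "s3-website-us-east-1.amazonaws.com",
                                    "ZEXAMPLE123")
```

Note that an alias record carries an `AliasTarget` instead of `ResourceRecords`, which is why no IP value appears here.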

Perfect. So this is our NGINX page. Let me quickly open up the terminal, and we'll stop NGINX manually. So I'll SSH in as root. (Don't worry; by the time you try this, the web server might be deleted anyway.) Perfect. So let's do one thing: let's stop NGINX with systemctl stop nginx. Perfect. So now NGINX has stopped, which means the website will stop responding. And you see, the website has stopped responding. Now, since this specific record is associated with the health check, which is the KPLabs video course health check, when this health check fails, Route 53 will move from primary to secondary. So this is how it would actually work.

So what we need to do is open up the health check. As we have discussed, it might take some amount of time for the health check to fail, since each check runs every 30 seconds and a few consecutive failures are needed. So let's wait a bit and then try it out. Perfect. So now the status is unhealthy. Route 53 has detected that it is unhealthy, so it will now redirect any subsequent requests for demofailover to the secondary endpoint. Now if I just click on Refresh, you can see that Route 53 has automatically moved this request to the secondary endpoint. So this is basically how failover routing really works. I hope this has been informative for you. Go ahead and try this out, because these things are very important and will help you not only in your exams but in real-world scenarios as well. So this is it for this lecture, and I look forward to seeing you in the next lecture.
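As a rough model of the timing and switching behaviour just shown (the 30-second interval and a failure threshold of three are Route 53's defaults for a standard-interval health check, but both are configurable):

```python
def time_to_unhealthy(request_interval=30, failure_threshold=3):
    """Worst-case seconds before Route 53 marks an endpoint unhealthy:
    the threshold of consecutive failed checks times the check interval."""
    return request_interval * failure_threshold

def resolve_failover(primary, secondary, primary_healthy):
    """Which answer Route 53 serves for a failover record pair."""
    return primary if primary_healthy else secondary

time_to_unhealthy()                                     # 90 seconds by default
resolve_failover("nginx-server", "s3-endpoint", False)  # -> "s3-endpoint"
```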

47. Route53 – Weighted Routing Policy

Hey everyone, and welcome back. In today's video, we'll be discussing the weighted routing policy in Route 53. A weighted routing policy basically allows us to specify the proportions in which traffic should be routed to the underlying servers. We'll understand this with an example: let's say you want to send only a small portion of the traffic to a newer website theme; then you can specify weights of one and 99.

So in this case, what would happen is that one record gets 1% of the traffic and the other gets 99% of the traffic. This is very useful because, let's say, you have created a new theme for the website. You don't really want all the traffic to be sent to the newer website, as it might contain bugs. So you specify that only 1% of your traffic be directed to the newer website, while the remaining 99% is directed to the older, more stable website. If you get positive feedback from that 1% of traffic, you can start to move it to 5%, 10%, 20% of the traffic, and so on. The way you tell Route 53 how much traffic should be sent to each record is based on weight: you assign a weight to each record set. The formula is the weight of a specific record divided by the sum of the weights of all the records in that record set. So let me quickly show you what exactly this looks like. As you can see, I have two record sets available within my Route 53 console.
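The weight formula above can be written as a tiny helper (the record names here are made up for illustration):

```python
def traffic_fraction(weights, record):
    """Fraction of DNS queries a record receives:
    its weight divided by the sum of all weights for that name."""
    return weights[record] / sum(weights.values())

weights = {"new-theme": 1, "stable-theme": 99}
traffic_fraction(weights, "new-theme")     # 0.01 -> 1% of traffic
traffic_fraction(weights, "stable-theme")  # 0.99 -> 99% of traffic
```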

One of them is weighted.zealvora.com, with an IP address starting with 54.30. And again, you have the same record name, weighted.zealvora.com, with an IP address starting with 128.32. Now, within this record set, if you look at the routing policy, the policy is weighted, and the weight here is two; the weight is one for the remaining one. Depending upon the weight that you assign to each record set, the amount of traffic that is sent to each value will vary. So, if I do a quick dig, the first response was the 128 address; the second time it was again the 128 address; and the third time it was the 54.30 address. As you can see, the traffic varies depending on the weight. Be aware that if you create weighted routing and test it from within an EC2 instance, the traffic may not appear to be distributed by weight, primarily as a result of the DNS cache. That is one important part to remember. Now let me quickly show you how you can create a weighted record set. Let's say I'll call it "demoweight". You can give it a random value, and in the routing policy, you select weighted. Here you can specify the weight; I'll give it a weight of two, and for the set ID you can give a random ID; I'll say "two".

You can also specify the TTL; let's put it at zero, and I'll create the record. In a similar way, I'll create one more record set. Make sure you give the same name, which is "demoweight". Give it the value 127.0.0.1. Again, this would be of type weighted; the weight would be one, and for the set ID you can give whatever name you intend to give. I'll give it a TTL of 0 seconds and create it. All right, now there are two record sets, each with a different weight. As we discussed, during your testing, specifically if you're doing it within your EC2 instance, you might not see the records being routed according to the weights, primarily because of the DNS cache. So let me try a dig on demoweight, and it responded with the 54 address. Now, if you're wondering what this is, this is basically the name server, ns-187. So if you look over here, you have ns-187; this is the name server that we are querying. The next time you run the query, it is routed to 127.0.0.1. So you see, the traffic has been routed across multiple record sets. Now, before we conclude this video, there is one important point that you should remember: if you want to stop sending traffic to a resource, you can change the weight of that specific record set to zero. In this way, traffic will stop going to that specific resource.
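The two weighted records above can be sketched as boto3-style changes (the domain and the first IP are placeholders; note how a weight of zero would stop traffic to a record):

```python
def weighted_record(name, ip, weight, set_id, ttl=0):
    """Build one change entry for a weighted A record."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "TTL": ttl,
            "SetIdentifier": set_id,   # must be unique per record in the set
            "Weight": weight,          # set to 0 to stop traffic to this record
            "ResourceRecords": [{"Value": ip}],
        },
    }

changes = [
    weighted_record("demoweight.example.com", "192.0.2.20", weight=2, set_id="two"),
    weighted_record("demoweight.example.com", "127.0.0.1", weight=1, set_id="one"),
]
```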

48. Route53 – Geolocation Routing Policy

Hey everyone, and welcome back. In today's video, we will be discussing geolocation routing. As the name suggests, geolocation routing allows us to choose the resources based on the geographic location of the users. This can be understood with a simple example: let's say that you want all the queries from the Asia location...

...to be routed to the elastic load balancer in the Singapore region. Similarly, if you have a query that is from the US, then you might want to direct those queries to the North Virginia region. All of those configurations can be achieved with the help of geolocation routing. So let me quickly show you a demo of how exactly this works before we discuss some of the important points. This is my Route 53 console. Within my zone, there are two records. Both of these records have the same name, mydemo.kplabs.internal; however, they have different IP addresses. One has the IP address 10.77.5.10, while the other has 10.77.5.20. Now, if you look into the routing policy over here, the routing policy is of type geolocation, and within here, we have specified the location as the United States and given it a set ID of two. Similarly, the other record set has the location of India and a set ID of one.

So basically, from the CLI, I am logged into an instance in the North Virginia region. If we quickly do an nslookup on mydemo.kplabs.internal, you will see that the response is 10.77.5.20. The reason is that the record set with 10.77.5.20 has its location set as the United States. You do have one more record set, 10.77.5.10, but it will only be served if the query comes from India. So this type of geolocation-based routing is quite useful. I'm sure you also might have observed that whenever you visit certain websites, you might get a message that the contents of the website are not visible in your country. That can be easily achieved with the help of geolocation. You can also achieve it at the web server level, for example at the NGINX level, but through Route 53 it is much simpler. Now, if you want to create geolocation-based routing, it is very simple. You create a record set and give it a name; let's say I call it "geolocation". You assign a value to it; I'll say 10.77.5.30.

Now, in the routing policy, you select geolocation, and within the location, you can specify a continent. So you've got Africa, Antarctica, Asia, Europe, North America, Oceania, and South America. You can specify by continent, and you can even specify by country. So this is the granularity that the geolocation-based routing policy supports. Now, there are certain important considerations that you should remember before you go ahead and implement geolocation routing within your production environment. The first is that geolocation routing works by mapping IP addresses to locations in a database. In such mappings, the results are not always accurate. The reason is that there might be certain Internet service providers that do not have any geolocation data associated with them, or it can happen that some ISPs move their IP block to a different country without notification.

So in such cases, you might run into an issue. For such cases, Route 53 allows us to have a default record associated with the geolocation-based routing policy. Let me quickly show you that. If you go up within the location dropdown, there is a value of "Default". So yes, it is possible to receive a DNS query from an ISP that does not match any of the continents or countries, and in such cases you want to handle it in an appropriate manner. For those cases, you can have the location set to Default and then specify one of the IP addresses, depending upon the use case that your organisation might have. This does happen; I would not say it is very common, but you need to make sure that if you are implementing geolocation, you also have a record for the default location.
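The three geolocation records, including the default catch-all, can be sketched like this (names and IPs follow the demo; treat the exact field shapes as an assumption about the boto3-style API):

```python
def geo_record(name, ip, set_id, geo, ttl=60):
    # geo is e.g. {"CountryCode": "US"}, {"ContinentCode": "AS"},
    # or {"CountryCode": "*"} for the default (catch-all) record.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name, "Type": "A", "TTL": ttl,
            "SetIdentifier": set_id,
            "GeoLocation": geo,
            "ResourceRecords": [{"Value": ip}],
        },
    }

changes = [
    geo_record("mydemo.kplabs.internal", "10.77.5.20", "us", {"CountryCode": "US"}),
    geo_record("mydemo.kplabs.internal", "10.77.5.10", "in", {"CountryCode": "IN"}),
    geo_record("mydemo.kplabs.internal", "10.77.5.30", "default", {"CountryCode": "*"}),
]
```

Without the last entry, queries from an unmappable ISP would get no answer at all, which is exactly the failure mode described above.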

49. Route53 – Multi-Value Answer Routing Policy

Hey everyone, and welcome back. In today's video, we will be discussing multivalue answer routing. Multivalue answer routing basically allows us to return multiple values, such as IP addresses, in response to a specific DNS query. One useful feature of multivalue answer routing is that it allows us to check the health of the resources, so that Route 53 responds with only the healthy resources.

Now again, this might be a little confusing until we go ahead and do a demo. So let's jump to the practical session and look into how exactly multivalue answer routing works. Within my CLI, if I quickly do an nslookup on mydemo.kplabs.internal, you will see that it is returning four record sets: the IP address 10.77.0.6, followed by .7, .8, and .5. So for one specific domain, you have multiple values being returned. This is referred to as multivalue answer routing. Now let's take a look at how it appears in Route 53. This is my Route 53 console. Within my hosted zone, you see I have four records. Each of these records has the same domain, mydemo.kplabs.internal.

However, each one of them has a different IP address. If I open one of them, on the right-hand side you'll notice that the routing policy is of type multivalue answer. So whenever a DNS client queries the domain mydemo.kplabs.internal, multiple IP addresses are returned in the DNS response. This type of policy helps when, let's say, you have a lot of web servers and you want to distribute the traffic; you can make use of a multivalue routing policy. Again, this is not a replacement for an Elastic Load Balancer; however, at a high level, I hope you have a good understanding of what exactly this is. Now, if you look into the PPT, in the second point we were discussing that multivalue answer routing allows us to check the health of the resources. So it might happen, let's say these are four web servers, and among these four web servers, one of them is not working.

What we want is that anytime a DNS request comes in for the domain mydemo.kplabs.internal, Route 53 should only serve the IP addresses associated with the web servers whose health check has passed; it should not serve the IP address of the server that has failed the health check. That is where the health check association comes in: you can associate a multivalue answer record with a health check that you create within Route 53. So this is a great feature. Let me quickly demonstrate how to implement multivalue answer-based routing. Let's call it "multivalue". We'll give it a value of 192.168.10.21. Within the routing policy, I'll select multivalue answer, and I'll give it a set ID of 1. You can also associate it with a health check; since we're just studying multivalue here, I'll simply leave this at the default, No. All right, so I'll go ahead and create the record set.

So this is the first record set. Let's create one more; I'll name it "multivalue" (make sure you name it the same). The value would be 192.168.10.2. The routing policy again would be multivalue answer, and the set ID would be two. In a similar way, we'll do one more; let's give this one the value 192.168.10.3, give it the set ID 3, and go ahead and create it. All right, so now there are three records that are part of the multivalue answer. Now, within the EC2 instance, let's go ahead and do an nslookup; I'll say multivalue.kplabs.internal. And as you can see, it responded with the three IP addresses associated with the multivalue.kplabs.internal record sets. Now, one important part to remember is that if you do not associate your multivalue routing policy with a health check, Route 53 will assume that all of these IP addresses are healthy.

And whenever a DNS query is made, similar to what we had done, Route 53 will respond with all the IP addresses. So again, if you do not associate the multivalue routing policy with any health check, Route 53 will assume all the hosts to be healthy, and it will return the IP addresses of all the records associated with the specific routing policy. If you do associate it with a health check, and the health check does not succeed, then Route 53 will not send the IP address of the record whose health check has failed. This concludes the high-level overview of multivalue answer routing. One last point to remember is that Route 53 can respond to a DNS query with up to eight healthy records. So this is one important part to remember.
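The health-filtering and eight-record cap described above can be modelled with a small helper. Records without an associated health check are treated as healthy, matching the behaviour just described; note that Route 53 actually picks the eight records at random, which the simple slice here does not capture:

```python
MAX_ANSWERS = 8  # Route 53 returns at most eight healthy records per query

def multivalue_answers(records, health_status):
    """IPs Route 53 would serve for a multivalue answer record set.

    `health_status` maps an IP to True/False; IPs with no associated
    health check are assumed healthy. Route 53 chooses up to eight
    healthy records at random; here we slice for determinism.
    """
    healthy = [ip for ip in records if health_status.get(ip, True)]
    return healthy[:MAX_ANSWERS]

ips = ["192.168.10.21", "192.168.10.2", "192.168.10.3"]
multivalue_answers(ips, {})                       # no health checks: all three served
multivalue_answers(ips, {"192.168.10.2": False})  # only the two healthy IPs served
```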

50. Route53 – Latency Based Routing Policy

Hey everyone, and welcome back. In today's video, we'll be discussing the latency routing policy in Route 53. Latency-based routing can be explained in its simplest form with the help of Google Maps. I'm sure most of you have already used Google Maps: if you want to reach a specific destination, Google Maps can give you multiple paths to reach it, and the path it gives you depends upon the traffic conditions and how fast you can get there. The path Google Maps gives you today may not be the same path it gives you tomorrow; depending on the traffic, it may take a different route. Latency routing follows a similar approach: if your application is hosted in multiple AWS regions, we can improve user performance by serving the request from the AWS region with the lowest latency.

Now, as we have discussed, the latency between locations might change over time; there can be backbone network changes and routing changes that happen. So depending on which region gives the lowest latency, Route 53 will route traffic to that specific region. As an example, a request routed to the Singapore region today may be routed to the India region tomorrow; again, it depends on the overall latency. So let me quickly show you what exactly this might look like with a quick demo. For today's demo, we have a public hosted zone for zealvora.com. This is one of the domains we've used for testing for a while, and for latency-based routing we need a public hosted zone. Now, if you look over here, I have two record sets available; both of them are named latency.zealvora.com. Each of these record sets has a different value: the first one has a value starting with 35, and the second one starting with 54.

Now, if I click on one of the record sets over here, you will see that the routing policy is of type latency, and for the second one, the routing policy again is of type latency. So whenever a user makes a request to latency.zealvora.com, the record set that is served is determined by the overall latency. Let's say that I have one server in the Mumbai region and one server in the US region. If a user in the US tries to connect to latency.zealvora.com, he'll be directed to the US region server; however, if someone from Singapore or elsewhere in Asia tries to connect to the same domain, he might be directed to the lowest-latency server for his location. So let's quickly look at what exactly that might look like. I'm in my CLI; let's quickly do an nslookup on latency.zealvora.com. Since I am currently in India, I am directed to the record set that has the lowest latency from my location, which is the 35.154 address. All right, now let's do one thing.

I'll open one of the websites that can do an nslookup from elsewhere; I believe the majority of this website's probes come from the United States. So let's query latency.zealvora.com. This will work because the current zone is a public hosted zone. And now, as you can see, it gave an address starting with 54.20; however, the CLI in India was given the 35.154 address. One question that arises is how this is different from geolocation, because even with geolocation, I can say that any request coming from the US region should go to the AWS resources in the US. However, there is a difference. Let's say you have one server in Mumbai and one server in Singapore. With latency routing, the request is directed to whichever server has the lowest latency, not necessarily the geographically nearest one. Say someone from India is trying to connect to my domain, but the Mumbai region server currently has the highest latency; then the request would be directed to the Singapore region server instead.

So that is where latency-based record sets really help. Before we conclude, let me quickly show you one thing. I'll go to EC2. Generally, if you want to create a latency-based routing policy (and I'm sure you already know the basic way to create a routing policy by now), it is quite simple. Let's say I'll call this one "demolatency". Now here you have to give the value, and before you do that, just select the routing policy as latency. Within EC2, let's select the Singapore region, and within the Singapore region, I'll create an Elastic IP. So I'll copy the Elastic IP and paste it into the value field, and you see, it automatically detected the region associated with the Elastic IP. This is something that I really like; it is quite interesting. Then you can give it a set ID; you can say "Singapore region", and then click Create. All right, so now you have demolatency.zealvora.com. You can create multiple record sets, and in the case of an Elastic IP, the region will be auto-detected depending upon the Elastic IP that you give. Once you have done that, you can try it out and check whether the latency-based routing works for your practical setup.
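Finally, a sketch of the latency record pair in the same boto3-ChangeBatch style (the IPs are placeholders; the `Region` field is what Route 53 compares latencies against, and it is what the console auto-detected from the Elastic IP above):

```python
def latency_record(name, ip, region, set_id, ttl=60):
    """Build one change entry for a latency-based A record."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name, "Type": "A", "TTL": ttl,
            "SetIdentifier": set_id,
            "Region": region,   # AWS region used for the latency comparison
            "ResourceRecords": [{"Value": ip}],
        },
    }

changes = [
    latency_record("latency.zealvora.com", "192.0.2.1", "ap-southeast-1", "singapore"),
    latency_record("latency.zealvora.com", "192.0.2.2", "us-east-1", "virginia"),
]
```

With two records like these, a query is answered with whichever record's region has the lowest measured latency from the resolver, which is the behaviour demonstrated in the two nslookup tests above.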
