Google Associate Cloud Engineer Topic: Load Balancing in Google Cloud Platform
December 19, 2022

1. Step 00 – Getting Started with Cloud Load Balancing

Welcome back. In the previous section, we created a number of VM instances. We’d like to load balance across all of those instances now. How do you do that? That’s where Cloud Load Balancing comes in. In this section, let’s look at cloud load balancing in depth. A cloud load balancer distributes user traffic across instances of an application in a single region or multiple regions. So if you have multiple VM instances in multiple zones and multiple regions, you can distribute traffic between them using a cloud load balancer. This is a fully distributed, software-defined managed service. Google Cloud Platform ensures that this service is highly available and replicated as needed. Some of the important features of cloud load balancing are: number one, health checks. For any load balancer, you would want to configure a health check, because that’s the only way the load balancer can tell whether an instance is healthy or not. Using health checks, cloud load balancing routes traffic only to healthy instances, which helps you recover from failures very quickly. Auto scaling is another important feature of cloud load balancing: based on the number of requests coming in, cloud load balancing would automatically scale. The other important feature is the single anycast IP. The cloud load balancer provides you with a single anycast IP, and this IP can be used to receive traffic from multiple regions around the world.

So you can serve global traffic using this single IP address. In addition, cloud load balancing also supports internal load balancing: if you need load balancing for applications deployed within a specific network, you can do that with cloud load balancing as well. The important reason we use cloud load balancing is that it enables high availability. Even if individual instances go down and come back up, cloud load balancing ensures that users are not affected, because it distributes traffic only between the healthy instances. It also enables auto scaling: cloud load balancing scales automatically based on the number of requests. And because the back ends, such as Compute Engine instances in instance groups, go down and come back up over time, putting a cloud load balancer in front of them gives us a very loosely coupled architecture. The other important thing is resilience: because we have health checks configured, the load balancer can detect if any of the instances are down, and it would distribute traffic only to the healthy instances. One other feature that ensures high availability is the fact that your load balancer can distribute load to instances in multiple zones and multiple regions. In this step, we got a high-level overview of cloud load balancing. In the next step, let’s understand some of the important background and terminology associated with load balancing.
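As a quick preview of what configuring a health check looks like in practice, here is a minimal sketch using the gcloud CLI. The name, port, and thresholds here are illustrative assumptions, not values from the course:

    # Create an HTTP health check that probes "/" on port 80 every 10 seconds
    # and marks an instance unhealthy after 3 consecutive failed probes.
    gcloud compute health-checks create http my-health-check \
        --port=80 \
        --request-path=/ \
        --check-interval=10s \
        --unhealthy-threshold=3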

2. Step 01 – Understanding HTTP, HTTPS, UDP and TCP Protocols

Welcome back. Whenever we talk about virtual servers, load balancers come to mind. And whenever we talk about communication between two different systems, we talk about protocols: HTTP, HTTPS, TCP, TLS, and UDP. What are these protocols? Let’s get a quick overview of all of them. If you already understand what is meant by HTTP, HTTPS, TCP, TLS, and UDP, you can safely skip this step. But if you don’t, I would recommend that you spend some time watching this step. When two systems communicate with one another, the communication takes place across multiple layers. The important layers are the application layer, the transport layer, and the network layer. Why do we need these layers? Let’s look at that right now.

Computers use a number of protocols to communicate, just like humans use languages to talk. I can talk in English or in Telugu, which is my mother tongue, and each of these languages has its own grammar and syntax. Similar to that, computers use protocols. If you want two systems to talk to each other, they need to understand a common language, which is called a protocol. HTTP, HTTPS, TCP, TLS, and UDP are different protocols at different layers; there are multiple layers and multiple protocols. The network layer is responsible for transferring the bits and bytes. When we talk about computers, everything is zeros and ones, and somebody has to handle moving those zeros and ones around: that’s the network layer. Whenever two systems talk to each other, this communication happens over a network, and there might be a number of systems in between that help them communicate with each other.

If we are communicating over the Internet, then the communication would happen over a number of routers. The next important layer is the transport layer. The transport layer is responsible for ensuring that the bits and bytes, the zeros and ones, are transferred properly. As we discussed, there are a number of systems in between, and if one of the routers in between goes down, or something happens due to which you lose data, the transport layer is responsible for ensuring that the bits and bytes sent by one system are properly received by the other. The next important layer is the application layer. The application layer is at the top of the chain, and it is where we build most of our applications: we make REST API calls; we send emails.

All of them work at the application layer. The thing is, each layer makes use of the layers beneath it. So even though we are using HTTP and HTTPS, which fall in the application layer, we are actually making use of the layers beneath it as well. All of that happens under the hood, and you would not need to really worry about it most of the time. Another interesting thing is the fact that not all applications communicate at the application layer. If your application needs high performance, let’s say you are building a streaming application or developing a gaming application, you might even skip one of the layers: you can skip the application layer and directly talk to the transport layer. So, in summary, there are multiple layers involved when two systems talk to each other. The network layer is responsible for the bits and bytes. The transport layer is responsible for ensuring that the bits and bytes are transferred properly. And the application layer is typically where we build most of our applications.

Now let’s look at some of the protocols that are used at each of these layers. The network layer uses a protocol called IP (Internet Protocol). This is the protocol used to transfer the bits and bytes, and it is unreliable: when a system sends some information, it goes over a network, and the data might get lost along the way. That’s why we have the transport layer. In the transport layer, there are two important protocols. The first one is TCP (Transmission Control Protocol). TCP gives high importance to reliability: if I’m sending ten bytes of information, it would check that on the second system the ten bytes arrived as they are and in the right order. The next important protocol to understand is TLS (Transport Layer Security). It is very, very similar to TCP.

However, because data is going over the network, you want to make sure that it is encrypted. TLS is essentially a secure form of TCP, sort of like TCP++. The other important protocol is UDP (User Datagram Protocol). UDP is an alternative to TCP, and it gives high importance to performance over reliability. There are applications where you would want to send 100 bytes over a network and it’s okay to receive only 98 of them on the other side, but performance is very, very important: you want to receive those bytes immediately. Video streaming applications are a good example of this. In a video, even if a few bytes are misplaced, that’s okay; as long as the majority of the picture appears good and it’s streaming fast, most people would be okay with it. UDP is used in those scenarios. And the next layer is the application layer. At the application layer, we are very familiar with HTTP (Hypertext Transfer Protocol). It helps you build web applications and the like, with a stateless request-response cycle. HTTPS is the secure version of HTTP: you don’t want to communicate over an unprotected network, and HTTPS secures HTTP communication using certificates installed on the servers to ensure that the communication between the two systems is secure. There are a lot of other protocols at the application layer: things like SMTP, which is used for email; FTP, which is used for file transfers; and a wide variety of others. In summary, most applications typically communicate at the application layer, talking HTTP or HTTPS, and applications using the application layer are actually making use of the layers beneath them. Typically, in the transport layer, they use TCP or TLS.

Whenever we are sending a web application request and response, we want to make sure that the bytes arrive safely and accurately on the other side; that’s the reason we use the Transmission Control Protocol. HTTPS web applications would be using TLS, which is the encrypted counterpart of TCP. However, some applications that need high performance talk directly to the transport layer. The examples we talked about, gaming applications or live video streaming applications, can use UDP because, for them, 100% reliability is not important; they can sacrifice reliability for the sake of performance. Now, the objective of this specific video was to help you understand the big picture. You don’t really need to understand the bits and bytes of all this stuff. If this gives you a big-picture overview of how things work, you should be able to understand whatever we’ll be discussing in the next few steps. Things like this are typically not discussed in courses, but I thought a big-picture overview of what happens under the hood would help you understand what is discussed in the next steps very easily.
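If you would like to see these layers in action, curl gives a quick illustration. With plain HTTP, curl opens a TCP connection and sends the request directly; with HTTPS, it first performs a TLS handshake. A small sketch against a generic host:

    # Plain HTTP: a TCP connection is opened, then the request goes out in clear text.
    curl -v http://example.com/

    # HTTPS: the same request, but curl first negotiates TLS (certificate
    # verification, key exchange) before any HTTP bytes are sent.
    curl -v https://example.com/

The -v flag prints the connection and handshake details, so you can literally watch the transport and application layers doing their jobs.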

3. Step 02 – Creating a Load Balancer in GCP – Google Cloud Platform

Welcome back. In this step, let’s create our first load balancer. We want to create a load balancer and distribute the load across all the instances that are currently part of our managed instance group. I’d go to Load balancing: just type in “load balancing,” and this will bring up the Load balancing page. Load balancing is part of Network services, and over here we can create a new load balancer. When you go to create a new load balancer, you’ll see the different options you have: you can create an HTTP load balancer, a TCP load balancer, or a UDP load balancer.

So if you need high-performance load balancing without reliability, you can go for UDP load balancing. This is Layer 4 load balancing for applications that rely on the UDP protocol. The UDP load balancer can either be Internet-facing or internal; that is, it can be available on a specific internal network or exposed to an external Internet audience. However, as you can see here, UDP load balancing is only available in a single region, so your UDP load balancer is a single-region load balancer. The next one is TCP load balancing. This is Layer 4 load balancing or proxying for applications that rely on the TCP/SSL protocols, and you can see that it has both load balancing options and proxy options. Again, this is for applications that require high performance, such as gaming applications.

And over here, the load balancing options that you have are TCP load balancers, SSL proxy load balancers, and TCP proxy load balancers. These load balancers can also be external or internal, single-region or multi-regional; so these load balancers can be global load balancers. And the last but most frequently used option is HTTP load balancing. REST APIs and web applications typically use HTTP load balancing. This is Layer 7 load balancing for HTTP and HTTPS applications. Similar to TCP load balancing, HTTP load balancing can be Internet-facing or internal, as well as single-region or multi-regional. What we want to create here is an HTTP load balancer. So let’s go ahead and click “Start configuration,” and what we want is an Internet-facing load balancer, so “From Internet to my VMs.” If you want something that is internal to a specific network, you can choose “Only between my VMs.” But for now, we would choose the Internet option and say continue. And over here, you can configure all the important things that make up a load balancer. The first thing is to give it a name.

So, as stated here, my HTTP load balancer is made up of three parts. One is the back-end configuration. A back-end service directs incoming traffic to an instance group. You can also use a storage bucket to serve your content. A little later in the course, we’ll be talking about cloud storage, and there you can create storage buckets. You can use HTTP load balancing to direct traffic to storage buckets as well. The use case that we are looking at right now is to direct incoming traffic to an instance group. So we can create an instance group on the back end. If you are using microservice architectures and if you have a number of microservices, then you can create a number of back ends using different instance groups for each of the microservices. And a single load balancer can also load balance between multiple back-end services.

After you’ve configured the back end, you’ll need to configure host and path rules. The host and path rules define how your traffic will be redirected to your back-end services. If you have multiple back-end services, you can say that a request for a specific path needs to go to a specific back-end service. You can also mark one back-end service as the default: if none of the rules match, traffic goes to that default back-end service. In addition to the back-end configuration and the host and path rules, you can configure the front-end configuration. That’s basically the IP address of the load balancer and the protocol and port that the load balancer would use to serve traffic; this is the IP address to which your users, who are basically the clients, would send their requests. So let’s go ahead and configure the back end first. In the back-end configuration, do we want to configure services or buckets? We want back-end services, so let’s create a back-end service. I’ll call it my instance group back-end service. The back-end type is instance group, the protocol that we want is HTTP, and let’s have a timeout of 30 seconds. That looks good.
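For reference, roughly the same back-end service can be created with the gcloud CLI. This is a sketch, not the exact console output; the health check name is an assumption (in the console, we attach the health check a little later):

    # Global back-end service for an HTTP load balancer: HTTP protocol,
    # 30-second timeout, reusing an existing health check.
    gcloud compute backend-services create my-instance-group-backend-service \
        --protocol=HTTP \
        --health-checks=my-health-check \
        --timeout=30s \
        --global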

And what is the instance group? The instance group that we want is my managed instance group, so let’s choose that. Let’s configure port 80, because that’s where we would want to serve traffic from; we are going to use HTTP, and HTTP is port 80. You can balance based on utilization or rate, and you can configure how much utilization you want here; I’ll take the defaults for those settings. You can also configure how much capacity you want. If you want your instances to operate at 80% utilization, you can set the capacity to 80%; that would give you a buffer so that even if one of the instances fails, the other instances can handle the traffic. I’ll leave it at 100%, which is the default. In addition to that, we’ll also configure a health check. Let’s use the same health check that we created earlier along with the managed instance group; that’s the health check that appears here, so that’s the one I’ll make use of. The default is to enable logging, and you can also configure the sample rate. You can specify a value between zero and one: 0.1 here would mean that I want to sample 10% of the requests, and one would mean that I want to sample all the requests.
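On the command line, attaching the instance group with these balancing and logging settings would look roughly like this; the zone is an assumption, and 0.8 and 1.0 correspond to the 80% utilization and 100% capacity discussed above:

    # Attach the managed instance group as a backend, balancing on utilization.
    gcloud compute backend-services add-backend my-instance-group-backend-service \
        --instance-group=my-managed-instance-group \
        --instance-group-zone=us-central1-a \
        --balancing-mode=UTILIZATION \
        --max-utilization=0.8 \
        --capacity-scaler=1.0 \
        --global

    # Enable logging and sample 10% of the requests.
    gcloud compute backend-services update my-instance-group-backend-service \
        --enable-logging \
        --logging-sample-rate=0.1 \
        --global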

In addition, you can also configure a security policy; Cloud Armor is one of the services in GCP that provides security for your load balancer. Let’s not really worry about the rest of the configuration, and I’ll say “create.” So this creates the back-end configuration. Because we only have one back end, I can now go to the host and path rules. The host and path rules are not really important for us here: we want to redirect all incoming requests to the same back end. But if, let’s say, you have a microservices architecture with multiple back-end services configured, then you can actually configure multiple rules. You can click Add host and path rule and define a rule based on the host or the path of the request.

So let’s say a request is coming in on a path called /microservice-a; I would want it to go to a specific back end, so I would configure that back end here. And let’s say if a request is coming in to /microservice-b, then I would want to redirect it to a different back end. You can configure things like that in here, but for now we don’t really need to do that. I’ll close this and stay with the default rule that is present here, which says that any request not matched by the other rules is redirected to this specific back end. The next thing that you can configure is the front-end configuration. You can configure a name, which is optional; the protocol, and we are going to use the HTTP protocol, that’s fine; and the network service tier.

We are going to use Premium, which is the default. We are going to use an IPv4 address; let’s stick with an ephemeral IP address for now, use port 80, and click Done. As you can see here, you can have multiple front-end configurations for the same load balancer, so you can configure your load balancer to receive traffic from a variety of sources in a variety of ways. After that, you can go ahead and review and finalize. You can see the back-end service that is configured here, my managed instance group with port 80; the default host and path rules; and the front end we configured. Let’s go ahead and say “create,” and this would create our load balancer. The creation of the load balancer takes a while, typically somewhere between eight and ten minutes. What I recommend you do is take a break, grab a coffee, and I’ll see you later.
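While the load balancer is being created, here for reference is roughly how the remaining pieces (the URL map with its default rule, the target proxy, and the front end) map onto gcloud commands. A hedged sketch of what the console wires up for us; the names are illustrative:

    # URL map: the default host and path rule sends all traffic to our back end.
    gcloud compute url-maps create my-url-map \
        --default-service=my-instance-group-backend-service

    # Target HTTP proxy that consults the URL map for every request.
    gcloud compute target-http-proxies create my-http-proxy \
        --url-map=my-url-map

    # Global front end: an anycast IP on port 80, Premium network tier.
    gcloud compute forwarding-rules create my-http-frontend \
        --global \
        --target-http-proxy=my-http-proxy \
        --ports=80 \
        --network-tier=PREMIUM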

4. Step 03 – Understanding Cloud Load Balancing Terminology in GCP

Welcome back. Let’s review some of the important terminology associated with cloud load balancing. Let’s consider this example: I have a number of instances that were created using a managed instance group, and I would like to route traffic to them using a cloud load balancer, and there are users who are sending requests to the load balancer. With this scenario in mind, let’s look at the terminology. Let’s start with the back end. The back end is a group of endpoints that receive traffic from a cloud load balancer; instance groups are a typical example. So the back end is nothing but the managed instance group that we created.

So this is the set of instances that are part of our managed instance group. When using a microservices architecture, you can have multiple back ends served by a single load balancer: you can create a back-end service for each of your microservices. The next important term is the front end. The front end is nothing but the address at which your cloud load balancer is available. How can users send requests to the cloud load balancer? They need an IP address, and they would need to use a specific protocol and port to reach the cloud load balancer. So this IP address is the front-end IP for your client requests. If you are using SSL, then a certificate is also assigned to your cloud load balancer as part of the front end. In addition to the back end and the front end, you can also configure host and path rules. This is specific to HTTP load balancing, where you can define rules for redirecting traffic to different back ends. As we discussed earlier, you can have multiple back-end services, for example serving different microservices, based on the path.

For example, if a request arrives at /microservice-a rather than /microservice-b, you can route it to different microservices. You can also redirect based on the host: if the request is coming in at a.example.com versus b.example.com, you can redirect to different back-end services. You can also go by HTTP headers, like the authorization header, or by the specific HTTP method used to send the request. One more important concept that you need to understand about load balancing is SSL termination, or SSL offloading; this can also be TLS termination or TLS offloading. If you are at Layer 7, we talk about SSL termination or SSL offloading; if you are at Layer 4 and the traffic is secured, we talk about TLS termination or TLS offloading. Let’s assume a scenario where you are exposing an application on the Internet. The traffic from the client to the load balancer is going over the Internet, and for this traffic, HTTPS is recommended: you want all the communication happening here to be secure, and you can achieve that by assigning an SSL certificate to your cloud load balancer.

Now, the traffic between the load balancer and the VM instances goes through the Google internal network, and therefore HTTP is okay for this part: while HTTPS is preferred, it’s acceptable to use HTTP for this specific part of the journey. And that’s where SSL termination, or offloading, comes in. If you’re doing SSL termination, that means the communication is at Layer 7: from the client to the load balancer, communication happens using HTTPS, and from the load balancer to the VM instance, communication happens using HTTP. So SSL, the secure part of the communication, is terminated at the load balancer, and that’s why this is called SSL termination or SSL offloading. If you are communicating at Layer 4, this is called TLS termination or offloading: between the client and the load balancer, TLS is used, and between the load balancer and the VM instance, we have unsecured communication over plain TCP. The advantage of going for SSL or TLS termination is that you reduce the load on the instances: the instances don’t need to handle HTTPS or TLS; they can directly handle the plain request. In this step, we understood some of the important concepts related to load balancing. I’ll see you in the following step.
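To make SSL termination a little more concrete: at Layer 7, you would upload a certificate and attach it to an HTTPS proxy, while the back-end service keeps speaking plain HTTP. A minimal sketch, assuming cert.pem and key.pem already exist and reusing the illustrative URL map name from earlier:

    # Upload the certificate that the load balancer will present to clients.
    gcloud compute ssl-certificates create my-cert \
        --certificate=cert.pem \
        --private-key=key.pem

    # HTTPS proxy: TLS terminates here; traffic onwards to the back ends stays HTTP.
    gcloud compute target-https-proxies create my-https-proxy \
        --url-map=my-url-map \
        --ssl-certificates=my-cert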

5. Step 04 – Exploring the Load Balancer in GCP – Google Cloud Platform

Welcome back. The creation of the load balancer took about ten minutes, and at the end of it, I could see that my load balancer was ready. So if I go and look at the back ends, I can see a back end present here: my instance group back-end service. You can also see that there is a front end, which is where the traffic is received. And if I click the load balancer, I can see the IP address and port on which the load balancer is exposed. So let’s pick those up and execute requests against them. You can see that as I execute requests, the load is balanced between the existing instances. Now, if you have any problem with this, the things that you need to check are: one, give sufficient time for your load balancer to start up.

The load balancer’s initialization takes about 10 to 15 minutes, so give it a few minutes and check again afterwards. The next thing you can check is the status of your back end: you can see that for me, the healthy count is 3 of 3, so all my instances are healthy. If you encounter a problem, also check the VM instances’ external IP addresses to ensure that the instances themselves are operational. When I go into the load balancer, I can see the front-end details, the host and path rules, and the back end, and you can also see the health status of the back-end services. You can also look at the monitoring: if you go to the monitoring tab, you will be able to see data around the load balancer. Right now, I don’t see any data because I’ve just created the load balancer; if you give it a little time, you should be able to see the monitoring data for this specific load balancer.
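You can check the same health status from the command line as well; a small sketch using the back-end service name from our setup:

    # Lists each backend instance together with its HEALTHY/UNHEALTHY state.
    gcloud compute backend-services get-health my-instance-group-backend-service \
        --global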

Now let’s quickly review the hierarchy. We created a load balancer to balance load between a set of instances. How were these instances created? We created an instance template defining what a specific VM instance should look like; that is my instance template with a custom image. And using this instance template, we created an instance group. Which instance group? My managed instance group, which uses this specific template, and in the instance group we configure how many instances we want to have running at all times. After that, we created a load balancer that is tied to this specific instance group. How did we tie the load balancer to the instance group? We created a back-end service that is mapped to this specific instance group, and whenever the load balancer receives traffic on the front end, it automatically distributes it to the back end. And that’s why, when I refresh this multiple times, you see the request being sent to multiple VM instances. It is very, very important to understand the entire hierarchy as far as the exam is concerned. So, what I recommend is that you spend a little more time thinking about what we’ve set up here.
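One way to walk this hierarchy yourself is to describe each level with gcloud; the names and zone here are assumptions matching our earlier setup:

    # Instance template -> managed instance group -> back-end service -> front end.
    gcloud compute instance-templates describe my-instance-template
    gcloud compute instance-groups managed describe my-managed-instance-group \
        --zone=us-central1-a
    gcloud compute backend-services describe my-instance-group-backend-service \
        --global
    gcloud compute forwarding-rules describe my-http-frontend \
        --global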

6. Step 05 – Choosing a Load Balancer in GCP – Google Cloud Platform

Welcome back. Let’s see what different load balancing options are provided by Google Cloud, and how you choose between them. Whenever we talk about a load balancer, there are a few important things we need to decide. One is: are we serving external users? That is, are we exposing the application on the Internet, or do we have only internal users? If I have external users, there are certain kinds of load balancers that are needed; if I have internal traffic, there are different types of load balancers that we would need to make use of. Let’s start with external traffic.

Let’s say my application, or my load balancer, is going to be exposed to the Internet. So this is basically Internet traffic into GCP, and this traffic can be of three kinds: HTTP or HTTPS traffic, TCP traffic, or UDP traffic. If it’s HTTP or HTTPS traffic, then there’s just one choice: HTTPS load balancing. If the traffic is TCP and you want SSL offloading (SSL termination), you would use an SSL proxy. If you don’t need SSL offloading, then the choices are TCP proxy or network TCP/UDP load balancing. TCP proxy is the way to go if you need global load balancing or want to use the IPv6 address format. Then the question is whether you want the end applications to see the request as it came from the users, that is, to see where the request is coming from. If the answer is yes, then you need to go for network load balancing; if you don’t need to preserve client IP addresses, you can use TCP proxy. The choice on the internal side is much simpler. If it’s TCP or UDP traffic, then you would go for internal TCP/UDP load balancing; if it’s HTTP or HTTPS traffic, then you would go for internal HTTP load balancing. Now, this chart is very important to remember as far as the exam is concerned. The important thing that you need to remember is that you should choose the type of load balancer based on the type of traffic you are going to serve.

If it’s internal traffic, then you’d go for internal load balancers; if it’s external traffic, then you’d go for external load balancers. Whether you are serving HTTP, HTTPS, TCP, or UDP traffic is also an important consideration, and the choices vary. If you are serving HTTP or HTTPS traffic, it’s very simple: you would either go for HTTPS load balancing or internal HTTP load balancing. If you want to serve external UDP traffic, then you’d go for network load balancing; if you want to serve internal UDP traffic, then you would go for internal TCP/UDP load balancing. If you have internal TCP traffic, the choice is again very simple: internal TCP/UDP load balancing. For external TCP traffic, the choice is a little more complex.

If you have TCP traffic from external sources and want SSL offloading, then you’d go for an SSL proxy. If you don’t need SSL offloading but need a global load balancer or are using IPv6, you should use TCP proxy. If you don’t need SSL offloading, don’t need a global load balancer, and are using IPv4, the question becomes whether you want to preserve client IPs: do you want the back end to see the request as is, including where it originated? If the answer is yes, you’d go for network load balancing; otherwise, you would still go with TCP proxy load balancing. In this step, we looked at how to choose a load balancer for your workload. I’m sure you’re having a wonderful time, and I’ll see you in the next step.
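As an illustration of that last branch, a network TCP load balancer is regional and proxyless, which is exactly why the back ends see the original client IPs. A hedged sketch with illustrative names:

    # Target pool of instances in a single region.
    gcloud compute target-pools create my-target-pool \
        --region=us-central1

    # Regional forwarding rule: there is no proxy in the path, so the
    # back ends see the original client IP addresses.
    gcloud compute forwarding-rules create my-network-lb \
        --region=us-central1 \
        --ports=80 \
        --target-pool=my-target-pool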

8. Step 07 – Scenarios – Cloud Load Balancing

Welcome back. In this step, let’s look at some scenarios related to load balancers. You want only healthy instances to receive traffic. That’s easy, right? You just need to configure a health check on the load balancer. You can configure a health check for the managed instance group as well. You want high availability for your VM instances. How can you do that?

You can create multiple managed instance groups for your virtual machines in multiple regions, and create one load balancer to balance load across these managed instance groups in multiple regions. So you need one load balancer and multiple managed instance groups; by distributing your application across multiple regions, you get the highest possible availability. Next: you want to route requests to multiple microservices using the same load balancer. How can you do that? You can do that by creating individual MIGs and back ends for each microservice. So if I have microservices A, B, and C with individual managed instance groups and back ends for each of them, I can create host and path rules to redirect to the specific microservice based on the path. If the path of the request points to microservice A, I can redirect to the back end of microservice A; if the path points to microservice B, I can redirect to the back end of microservice B. Both of these scenarios are sketched below.
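A rough sketch of both scenarios in gcloud terms; mig-us, mig-eu, backend-a, and backend-b are hypothetical resources:

    # Multi-region HA: attach regional MIGs from two regions
    # to one global back-end service.
    gcloud compute backend-services add-backend my-backend-service \
        --instance-group=mig-us \
        --instance-group-region=us-central1 \
        --global
    gcloud compute backend-services add-backend my-backend-service \
        --instance-group=mig-eu \
        --instance-group-region=europe-west1 \
        --global

    # Microservice routing: path rules in the URL map send each
    # path prefix to its own back-end service.
    gcloud compute url-maps add-path-matcher my-url-map \
        --path-matcher-name=microservices \
        --new-hosts="*" \
        --default-service=my-backend-service \
        --path-rules="/microservice-a/*=backend-a,/microservice-b/*=backend-b"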

One interesting thing about a load balancer is that you can also route traffic from the load balancer to a back-end Cloud Storage bucket. The next scenario: you want to load balance global external HTTPS traffic across back-end instances in multiple regions. Which load balancer would you choose? The phrase “global external HTTPS traffic” is the crucial hint here, and I think it makes the answer really, really simple: you would go for an external HTTPS load balancer. Now you want SSL termination for global non-HTTPS traffic, and you need to choose a load balancer for that. You would go for an SSL proxy load balancer; when it comes to SSL termination for non-HTTPS traffic, SSL proxy is the way to go. In this quick step, we looked at some of the important scenarios related to load balancers. I’m sure you’re having a wonderful time, and I’ll see you in the next step.
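On that Cloud Storage point, a back-end bucket is created in much the same way as a back-end service and can then be referenced from the URL map just like one; my-static-assets and my-bucket are illustrative names:

    # Wrap an existing Cloud Storage bucket as a load balancer back end,
    # for example to serve /static/* straight from the bucket.
    gcloud compute backend-buckets create my-static-assets \
        --gcs-bucket-name=my-bucket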
