AZ-700 Microsoft Azure Networking Solutions Topic: Azure Load Balancer
December 16, 2022

1. Overview of Azure Load Balancing Solutions

So we're moving on to talk about the concept of load balancing and the four major services within Azure that support it. First up, we're going to talk about the Azure Load Balancer. We'll move on to the next section and talk about Application Gateway; we'll talk about Azure Front Door after that. And finally, we'll talk about Azure Traffic Manager. Those are the four most popular load balancing solutions in Azure. So let's start by talking about load balancing in general. The concept of load balancing is twofold.

One is that you might want to distribute traffic among multiple servers, which reduces the load, the amount of work, that any single server has to take on. So typically, if you've got a web server (the instance on screen might be an example of a public website), traffic comes in from the top over port 80, and it's intercepted by the load balancer before it gets to the web servers. The load balancer decides which of the three servers gets the traffic. Now, there are different algorithms. It could be a round-robin algorithm, where the first server gets a request, then the second, then the third, and then it goes back to the first; that's called round robin. As a result, traffic is routed one by one to the web tier, where the website is replicated across three servers. So you have three web servers running the same website and serving up the exact same pages, and it's basically the luck of the draw which web server a given customer or viewer gets sent to. And every request they make, unless you enable session affinity, can go to a different server.

So this has two benefits, like I said. One is that it reduces the demand on a single server. The second is that it protects you: it increases availability, because if one server needs to be rebooted or goes down, the other two are still up. It's basically protecting you against single points of failure. So there are four load balancing solutions in Azure; like I mentioned, these are, on screen, Load Balancer, Application Gateway, Traffic Manager, and Front Door. I'll give you a quick overview of all four, but we'll go over them in greater detail and even do demonstrations to show you how to create them. The Azure Load Balancer is perhaps the most basic service. There is a free option: you can create a basic tier load balancer, and it doesn't cost you anything. And even if you were to pay for a standard tier load balancer, the price is around two and a half cents per hour, which works out to about $18 per month. Of course, you also pay for the rules, and the more rules you have, the more you pay. However, it does begin at around $18 per month. Now, if you're familiar with the OSI model, this operates at layer 4, which is the transport layer. That means it does not understand anything in the higher-level layers, such as the application layer.

So it doesn't understand HTTP or HTTPS, it doesn't understand URLs, and it doesn't understand which domain name you've entered. It's limited to IP addresses, ports, and things like that. Now, at layer 4, the load balancer does have the benefit of being able to support traffic other than web traffic. So if you have a server that runs on different ports, such as an FTP server, an SSH server, or something similar, you can load balance ports other than 80 and 443. Now, the load balancer is restricted to balancing traffic on a single virtual network. So you can't have your servers all over the world in different regions; they basically have to be servers running on the same virtual network, and the load balancer balances traffic across them. The Application Gateway doesn't have a free option, so you immediately have to decide that you want it, because you're going to pay for it. But it isn't all that expensive either: a basic application gateway is again around two and a half cents per hour, or roughly $18 per month. It functions at layer 7.

So that's the application layer. It has the benefit of being able to tell what the domain name is, which path is being requested, whether in the images folder or the root directory, and which file is being accessed. You can create some very creative traffic-shaping rules based specifically on the URLs, and you can't do that with a load balancer. But at layer 7, it's limited to web traffic. As a result, traffic that isn't on port 80 or port 443 cannot be load balanced. For extra money, there is the option of a web application firewall, which protects against SQL injection, cross-site scripting, and those types of attacks. It also supports multiple servers across different VNets, and there is an availability zone option for load balancing servers across the region into different availability zones. Traffic Manager is very cool. It operates at the DNS level: when the client enters your domain name into their browser, Traffic Manager intercepts that request and decides, based on the rules you've set up, which IP address to serve to that specific individual. For instance, you may want to send clients to the servers that are closest to them.

So if they're requesting your domain from Europe, they get sent to the European web servers; from the United States, they go to the US web servers; from Asia, they go to the Asian web servers, et cetera. So if you host your website in different regions around the world, Traffic Manager will simply resolve the domain to different IP addresses for different people. That's pretty cool. It also supports automatic failover: if the servers in Asia were suddenly unavailable, say because you were upgrading them or there was an outage, the American servers could take over and your domain would remain operational. Now, it's not magic. There is some DNS caching involved, so it might take a few minutes for your website to come back up, but that is better than being down for hours. And Traffic Manager does support all types of traffic, not just web traffic.
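As a quick illustration, here is roughly how you could create a Traffic Manager profile with the Azure CLI that routes each client to the lowest-latency endpoint. This is a minimal sketch: the resource group AZ700 and the profile names are assumptions made up for illustration, and you would still add an endpoint per region afterward.

    # Performance routing sends each client to the endpoint with the lowest latency
    az network traffic-manager profile create \
      --resource-group AZ700 \
      --name az700-tm \
      --routing-method Performance \
      --unique-dns-name az700-tm-demo \
      --ttl 30 \
      --protocol HTTP \
      --port 80 \
      --path "/"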

And finally, the newest service is called Front Door. It's also a global service, and it's like an amalgamation of several different Azure products. It's like an Application Gateway: it operates at layer 7, understands URLs, and has a web application firewall option. But it's also like a CDN: it supports caching, and it has app acceleration features if you want to do compression and things like that. So there are some interesting features for serving up web apps globally. It can also be an SSL endpoint, so if you've got public traffic secured by a certificate, the Front Door service can be the place where the encryption terminates, which is pretty cool. So those are the four major services. In this section, we're going to be focusing on the load balancer. We'll talk about the features load balancers have and how they work, and then do a demo to see how to create your own load balancer.

2. Choosing the Right Load Balancing Solution

So I found a flowchart on the Microsoft Docs website that helps you decide between those four solutions, or combinations of them. I know we haven't really talked about the details of each yet, but I do want to show this flowchart, go through it, and hopefully that will clarify a few things before we actually get into describing them in detail. So here's the flowchart; we begin at the start. Obviously, if your solution is not a web application that handles HTTP and HTTPS, that severely limits your options, because Application Gateway and Azure Front Door do not support non-web applications.

As a result, you'll be limited to the Load Balancer, possibly combined with Traffic Manager. So if you have an application that's internal only, then you're just going to use the Load Balancer Standard Edition and get that service level agreement. If it is Internet-facing, maybe you want to have a global load balancer like Traffic Manager: you'll have your solution deployed to multiple regions, using the Azure Load Balancer locally and Traffic Manager to balance between the regions. That's the simple case. Now, if you have a web application but it's not Internet-facing, then you're just going to use Application Gateway, because it is layer 7. It is a more sophisticated product, but because you don't need any of the Internet-facing Front Door or Traffic Manager options, that makes your decision pretty simple: Application Gateway is the answer. So now we come down to: is your application globally deployed in multiple regions? It's either yes or no. If it's not, then you are pretty much going to go with Application Gateway, even if your app is not global.

Now, maybe you want some of the additional Azure Front Door performance features; acceleration is a feature of Front Door, so maybe you put Front Door there. But generally, the answer is Application Gateway. So the most complicated case is that it's public-facing, web-based, and global. Then you're going to have to ask whether you need your SSL security certificate offloaded as soon as traffic hits the front door, with unencrypted traffic behind the scenes. If that's true, then you're basically directed to Front Door, because in this chart it's the one that supports SSL offload. If you don't need that, then it's a question of whether you're serving VM traffic, Kubernetes Service traffic, or Azure App Services and Functions, and different solutions support those three. Front Door, as you can see, can handle everything on its own if you only use App Services, because App Services has load balancing built in. Functions don't require load balancing per se in the consumption plan, because Microsoft takes care of the scaling.

With AKS, again, you've got clusters and nodes, and there are scaling options built into AKS, so some of that is handled within the Kubernetes cluster. And for virtual machines, you may have a virtual machine scale set, but basically this is your solution: Front Door in the front and, funny enough, an Azure load balancer on the back end. So as you can see, it's a very interesting chart that breaks down why you would choose each solution. Hopefully that was helpful. As I said, we're going to get into each of these, and so you'll understand the features. But having a chart like this helps you understand your solution and why you would go down each path: why it's recommending a load balancer and not an application gateway, et cetera.

3. Overview of Azure Load Balancer Service

Now, there are two SKUs, or product codes, available: the Basic Edition and the Standard Edition. One of the biggest benefits of the Basic Edition is that it's free. We're going to see in a second that it's missing some features and there's no SLA; of course, for free, you get what you pay for. Here are the features of the basic load balancer: you can have up to 300 instances, which could be virtual machines or servers, in your back-end pool.

So when the traffic comes in, it can get distributed to up to 300 servers behind the scenes, which sounds like a lot. It only supports virtual machines in a single availability set, or a virtual machine scale set (VMSS), as the back-end pool. In terms of health probes, of course, the load balancer does check the servers to make sure they're still running, and it uses a health probe for that. Well, the basic tier supports TCP probes and unsecured HTTP probes only; it does not support HTTPS health probes. It also does not support availability zones or the high-availability options. And when it's set up, it doesn't come with any security: if you set up a public load balancer that is open to the Internet, you don't have any protection. You can add a network security group to protect it, but it doesn't come with one by default. And there's no service level agreement; Microsoft makes no guarantees about the uptime of this service.

If you are willing to pay, you get the standard load balancer. You get upgraded from 300 instances to up to 1,000, and the back-end pool becomes any VM or virtual machine scale set in a single virtual network; it doesn't have to be a single availability set. And so load balancers basically distribute traffic within a single virtual network, not across multiple virtual networks or across regions. The standard tier adds support for the HTTPS health probe, and, as we said, there are also availability zone features for standard load balancers. If you're running an internal standard load balancer, you get the high availability option. Security comes locked down by default: traffic is blocked until you add exceptions to allow external people to get into your load balancer. And it does come with a service level agreement of 99.99%. Now, I mentioned the basic and standard load balancers; that's one decision. Another of the first decisions you're going to make is whether it's a public-facing load balancer or an internal load balancer, and we'll show the difference here. So earlier, I showed the public load balancer with a web farm effectively on the back end.

Well, here's the bigger picture, where you've got two load balancers. There's one public load balancer over port 80 that supports virtual machines on the back end, and then those virtual machines point to an internal load balancer that is not available to the public and that load balances the servers at the business tier. And you'll notice that everything takes place within a virtual network. To put it another way, load balancers can be public or internal (private), and you make that decision when you configure it, so an internal one doesn't get a public IP address. We saw some of the features, like the high availability option being for the internal one only, et cetera. So that is the overview of the load balancer. In the following sections, we will switch to the portal, create a load balancer, deploy some servers behind it, and examine how traffic is distributed. So stay tuned for that.

4. Create a Load Balancer DEMO

So let's create ourselves a load balancer. We're going to switch over to the portal, go into the marketplace to create a resource, and I'm going to type "load balancer." Now, I'm going to warn you, because this is one of those black holes within the marketplace where you type the name of an Azure service in full and do not get the Azure service on the first page of results. I'm going to filter on Microsoft as the publisher, and then we'll see Load Balancer as a service. So it's arguably a poorly named service, even though the name perfectly describes what it does.

So here we are with the load balancer, and I'm going to click Create. Now, I've been putting stuff into my AZ700 group, so I'll continue with that practice. The load balancer needs a name like any other resource: AZ700LB. (LB, which also happens to be the abbreviation for pounds, which is interesting.) Now, we said a couple of videos ago that the first big decision you make is whether it's a public or private load balancer. We're creating a public load balancer in this particular case, which requires a public IP address. The next decision you have to make is whether to go with the basic SKU or the standard SKU; we discussed the feature distinctions, with the standard SKU allowing availability zones and other such things. We'll keep it on the basic SKU, and it is free. Now, since it's a public load balancer, we do need a public IP address. So give it a relatively unique name for my account, and take the basic SKU for the IP address too. We're not going to reserve a static IP address.

If you wanted to do this for a website, then you may want to reserve a static IP address that you can keep; that way you can move things around without losing your IP, because of domain names, registries, and things like that. But we're going to use dynamic. And we won't implement IPv6, although it does support IPv6. We're not going to assign any tags to it. We can just say "review and create." And, as you can see, the load balancer only has six or seven properties. Now, when you go to create the actual load balancer, it doesn't take too long, because really there are no actual instances being started; a load balancer, I believe, is essentially entries in a routing table. So it's not like this is going to take 45 minutes to create. In fact, that took about 45 or 46 seconds.
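As an aside, if you prefer scripting to the portal, roughly the same resources can be created with the Azure CLI. This is a minimal sketch, assuming a resource group named AZ700 already exists; the resource names are made up for illustration.

    # Create a dynamic, basic-SKU public IP for the front end
    az network public-ip create \
      --resource-group AZ700 \
      --name az700-lb-pip \
      --sku Basic \
      --allocation-method Dynamic

    # Create the basic-SKU public load balancer using that IP,
    # with a named front-end configuration and an empty back-end pool
    az network lb create \
      --resource-group AZ700 \
      --name AZ700LB \
      --sku Basic \
      --public-ip-address az700-lb-pip \
      --frontend-ip-name frontend-1 \
      --backend-pool-name bepool-1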

You can say "go to the resource." And the real trick with load balancers, and even later when we get into application gateways, is configuring the front end, the back end, the load balancing rules, and the health probes. You can see under Settings we've got four tabs that are equally important. Now, in terms of the front-end IP configuration, we did create a public IP address at creation time, so there is a single public IP address relating to this front end. It actually supports multiple IP addresses on the front end: we can just say "add," give it another name, and choose either an existing IP address or create a new one. So we can have several front-end IP addresses. This would be an example of having several public-facing websites but a single load balancer handling traffic for all of them. Now, a lot of times this is done by domain name, but as we said earlier, this is a layer 4 load balancer; it only understands IP addresses.

It does not understand domain names, so it can only differentiate based on different IP addresses. If you want to use a load balancer and differentiate between traffic for different websites, it has to be by IP address. We're not going to add any more at the current time.
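For completeness, adding a second front end with the CLI would look something like this sketch; it assumes a second public IP named az700-lb-pip2 has already been created.

    # Attach an additional public IP as a second front-end configuration
    az network lb frontend-ip create \
      --resource-group AZ700 \
      --lb-name AZ700LB \
      --name frontend-2 \
      --public-ip-address az700-lb-pip2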

Now, switching from the front end to the back end: the back end is the set of servers that are going to receive the traffic, do the work, and respond. The whole idea of the load balancer is that there are at least two servers able to do the work equally, with the traffic distributed between them. So we do need to add at least one pool, which is a group of servers, to do the work. I'm going to call this backend pool 1. We do choose which virtual network we're selecting from; remember, the load balancer works at the virtual network level, so we are limiting ourselves in this particular pool to a single virtual network. We could, at this point, create the pool with no servers associated with it. We might have to do that, because our only other choice here is virtual machine scale sets, and we don't currently have any on this account; we haven't created one yet. Now, in fact, we do have virtual machines; we've been experimenting with them in this course. But those machines are not part of an availability set.

And all virtual machines must be in the same availability set: we can't just add loose VMs to a basic load balancer; we can only add VMs that are part of one availability set. So we're either going to have to recreate some VMs and put them in an availability set, or fire up a virtual machine scale set. I'm going to leave this unassociated for now; we're creating an empty back-end pool just for the purposes of this demo. To test it, we're going to have to create some servers, and we'll do that in a moment. So we have an empty back-end pool. Now, there are two more required pieces, the health probes and the load balancing rules, plus some settings that are optional. The purpose of a health probe is to ensure that the servers in the back-end pool are healthy.

So we can create a health probe, and we'll call this healthprobe-1. We have two choices of health probe. You can use an HTTP probe, which issues a GET request to a specific path and checks whether it responds, or you can just check TCP connectivity, which only confirms that the port accepts connections, not that the web page actually returns a status 200; so that's a different kind of check. By default, it checks TCP port 80, the web port, with a five-second interval. So every five seconds, this health probe is going to check whether port 80 can be connected to, and if it fails twice in a row, the server is considered dead, not responding, unhealthy. That's a fairly aggressive default: it only gives your server 10 seconds of failure before you kick it out of the load balancing club. You can increase the interval to, say, 15 seconds, and then it's a whole 30 seconds for the server to fix itself before you kick it out of the club.
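The equivalent probe in the CLI would look something like this sketch, using the 15-second interval and two-failure threshold just discussed (the names continue the earlier made-up examples):

    # TCP probe on port 80, checked every 15 seconds;
    # two consecutive failures mark a server unhealthy
    az network lb probe create \
      --resource-group AZ700 \
      --lb-name AZ700LB \
      --name healthprobe-1 \
      --protocol tcp \
      --port 80 \
      --interval 15 \
      --threshold 2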

So I can create this health probe; it will work fine even without any servers behind it. Now, the health probe isn't active yet, because we haven't created the load balancing rules. So this is the last step to creating the load balancer in this particular setup: the rules. The rules basically say: when traffic comes into the front end, under what condition is it sent to a back-end pool? You can have multiple front ends and multiple back-end pools, so your rules could say, "When traffic comes in over this IP address, send it to this pool; when it comes in over the other IP address, send it to the other pool," and you can attach a health probe as well. So I'm going to add a rule. Here's the basic rule. We only have one front-end address, so all traffic is subject to this rule; there's no differentiator.

Any traffic that enters the load balancer will be subject to this rule. Now, since this is a layer 4 load balancer, we can support non-HTTP traffic, including UDP, so we can set up different rules for different ports. You can set up one rule for web traffic and others for an FTP or SFTP server, even RDP or SQL Server (port 1433, I believe), because each piece of software uses a different port. As a result, you can create different rules for each. So we're going to connect port 80 on the front end to port 80 on the back end. And you can see that with this technique you can do a little bit of port translation: you can accept traffic on one port on the front end and send it to the back end on a different port. For security purposes, you could put the back end on port 8080 or 5000 or something like that. There is only one back-end pool.

As a result, traffic arriving at the front end on port 80 will be routed to the back-end pool. We're also required to choose a health probe, so we'll pick the one we just created. Now, this is where we can set up session persistence, which means that a person who gets sent, say you have five servers in the back-end pool, to server number one will then always get sent to server number one; traffic from a given client is always handled by the same VM. This is not a great practice, because if the health probe determines that VM one is unhealthy, then something bad is going to happen to that client when their traffic gets sent to VM two. If you've got something that requires that particular VM to be alive to successfully serve the client, such as in-memory sessions or files on the local machine, then that session gets broken. So I would not set this up unless you really had to.

The other setting here is the idle timeout, a sort of keep-alive. By default, a session with a client is kept open for four minutes; after four minutes of inactivity, the session is considered dead, requiring you to reopen the connection. So we'll just create the load balancing rule, and at this point, even without any back-end servers, our load balancer is considered active; without a load balancing rule, the load balancer isn't really doing anything at all. So here we have the load balancer basics. It has all the rules in place. We don't have any servers, so we can't really test it right now. In the next video, I could create myself a quick little virtual machine scale set with a web server on it, and then we can test this, because we do have a public IP address and we can test it in the browser. So why don't I do that next?
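For reference, the equivalent rule in the CLI might look like the sketch below, tying together the hypothetical front end, back-end pool, and probe from the earlier snippets. The --load-distribution Default setting is the no-session-persistence behavior discussed above, and --idle-timeout is the four-minute keep-alive, in minutes.

    # Forward front-end port 80 to back-end port 80, gated by the health probe
    az network lb rule create \
      --resource-group AZ700 \
      --lb-name AZ700LB \
      --name lbrule-1 \
      --protocol tcp \
      --frontend-port 80 \
      --backend-port 80 \
      --frontend-ip-name frontend-1 \
      --backend-pool-name bepool-1 \
      --probe-name healthprobe-1 \
      --load-distribution Default \
      --idle-timeout 4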

5. Testing a Load Balancer

Alright, so I'm going to create a new virtual machine real quick, and I'm going to call it LBServer1. With the basic load balancer, we do have to put it in an availability set, so I've chosen the availability set option. There are none available, so I'm going to have to create a new availability set and leave it with the defaults. It's a Windows server with one CPU.

I'm going to need to RDP into it, and we're also going to need the web port, so I'm just going to open port 80 for now; we won't do SSL, even though this is a web server, because we just want to test it. It's getting a brand-new public IP address of its own, separate from the load balancer's front-end IP. I don't want boot diagnostics or anything special. Okay, I'm going to hit the create button, and when it's finished, I'm just going to remote into it, install IIS, get it working, show that we have a web server, and then we can add it to the load balancer.
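Scripted, the same VM setup might look roughly like this. It's a sketch with placeholder names and credentials; the run-command at the end installs the IIS role so the default page gets served.

    # Create an availability set and a Windows VM inside it
    az vm availability-set create --resource-group AZ700 --name lb-avset-1

    az vm create \
      --resource-group AZ700 \
      --name LBServer1 \
      --image Win2019Datacenter \
      --availability-set lb-avset-1 \
      --admin-username azureuser \
      --admin-password '<your-password>'

    # Open port 80 on the NSG and install IIS remotely
    az vm open-port --resource-group AZ700 --name LBServer1 --port 80
    az vm run-command invoke \
      --resource-group AZ700 \
      --name LBServer1 \
      --command-id RunPowerShellScript \
      --scripts "Install-WindowsFeature -Name Web-Server -IncludeManagementTools"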

Now, I could have added it to the load balancer as part of creating the virtual machine; I'll just show you that real quick. In the networking tab, there's an option to place this virtual machine into an existing load balancer. I could have chosen that, but we're going to do it as a separate step just for training purposes. Okay, so let's RDP into this thing as soon as possible. Just like before, we'll make this into a web server by adding the Web Server (IIS) role. We're not going to be too concerned about which features it has, as long as it serves up the default web page. All right, so we successfully made this a web server.

Now let's make sure that we can access this web server from outside of Azure. I'm going to copy the IP address, open a browser window, and paste in the IP address. And there it is: it works as a standalone web server. What we want to do now is make it one of our back-end servers. Now, since it's in an availability set, I could obviously add multiple servers to that availability set, but just for this demo there's no need to create multiple servers. We're going to go into the back-end pools of our load balancer, open the back-end pool, go to the virtual network that contains our virtual machine, and say Add. This virtual machine is part of availability set 1, so I add it and click Save. Now it is part of our back-end pool, and I believe it's all been fully deployed.
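In the CLI, the same step is done by attaching the VM's NIC to the pool. This is a sketch; the NIC and ipconfig names below are the defaults az vm create generates, so treat them as assumptions and check your actual resource names.

    # Add the VM's NIC ip-configuration to the load balancer's back-end pool
    az network nic ip-config address-pool add \
      --resource-group AZ700 \
      --nic-name LBServer1VMNic \
      --ip-config-name ipconfigLBServer1 \
      --lb-name AZ700LB \
      --address-pool bepool-1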

So I should be able to copy the IP address of the load balancer's front end and open that in a browser window. And we have our server; this is now being served through the load balancer. We could remove the public IP address from the back-end server and we'd still have access to it through the load balancer. I would say that this is pretty simple, but it's not dead simple; there is some complexity in terms of the types of servers you can create in the back-end pool. Again, we're running a basic SKU; if we had a standard SKU, we'd have a couple more options. But once we set up the front end, the back end, the health probes, and the load balancing rules, this load balancer is going to evenly distribute the traffic among all of the servers. In this case, we just have one, but if we had five or six in the same availability set, or a virtual machine scale set, it would do that.
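If you did have several servers in the pool, a quick way to watch the distribution is to hit the front-end IP repeatedly and see which machine answers. This sketch assumes each server's page identifies itself somehow, for example in the page title.

    # Look up the front-end IP, then fetch it ten times in a row
    LB_IP=$(az network public-ip show --resource-group AZ700 \
      --name az700-lb-pip --query ipAddress --output tsv)

    for i in $(seq 1 10); do
      curl -s "http://$LB_IP/" | grep -o '<title>.*</title>'
    done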
