1. Introduction to Containers
So in this section, we’re going to start talking about containers. Now, if you don’t know, the container model for deployments is something that’s been growing over the past few years and has taken over cloud computing in a big way. The real advantage of containers is that you build the container image on your development workstation, test it there, and that same image can be deployed unmodified into development, staging, and production.
It can go into the Azure, AWS, or Google cloud platforms. You can push that image anywhere, and it will run in any container environment. So containers are a cross-platform way of deploying code. An Azure web app, by contrast, can only run in Azure: you can’t run a web app on AWS or Google Cloud Platform, in a private cloud environment, or on your local machine. But you can run containers in all of those places. So Azure embraces the container model in a big way. Now, we have previously looked at websites, and you can create a container web app: give it a name, put it in a resource group, deploy it into a service plan, and then push your code in there as a container image. Notice that there’s no dropdown indicating whether the underlying language is PHP or Node.js.
There’s no language choice because the container contains everything it needs to run. So you can choose whatever language you want, because it’s the container itself that has to support it. So the web app container model is one way to do it. You get all the benefits of web apps, including scaling, auto-scaling, monitoring benefits, and all that stuff. But that’s just one way to run containers. Kubernetes is possibly the most popular and well-known method. Kubernetes was originally created at Google, but it has since evolved into one of the industry standards for container orchestration. And it does so in a clustered way: you’ve got multiple underlying virtual machines and an orchestrator that controls them all. So Kubernetes is an enterprise-grade service that requires a lot of server resources. It is not cheap, nor is it the easiest way to run containers, but it has industrial strength. So if you’re planning to have your code in containers and you want high availability, Kubernetes, with all of the tools provided to run the service, is the way to go. And we’ll get to that in a second.
Another way to run containers, which we will talk about in a future section, is through a container instance. Now, the container instance, Microsoft says, is the fastest and simplest way to run a container. It does not support scaling, and you don’t even have access to the underlying hardware. When you deploy a container instance, you just identify the image that it will run. So, for me, this is for demos and development: a quick way to see something running in a container. It’s not necessarily a long-term production model. Container instances can be used on their own or for bursting. If you have a Kubernetes cluster and you need to grow, you need to add nodes quickly, and you can burst into a container instance because it is the fastest and simplest way. Adding a new node to your Kubernetes cluster takes a little bit longer; it’s like spinning up a new VM and then deploying all the code there, et cetera. I should also say that you could, of course, run containers in a virtual machine, but then you would have to install Docker or other container software into that virtual machine, and you’d be responsible for it. Finally, there is a thing called Service Fabric.
This is not going to be covered in this course, but Service Fabric does support containers. It’s better known for its microservices model, which is a completely different way of running code, but you can run it in a containerized model. And so you have your cluster of servers; it’s almost like an alternative to Kubernetes, a Microsoft-proprietary alternative to Kubernetes. Service Fabric is cool because you can add servers outside of Azure to a cluster. So you could have five servers in Azure plus servers in your own environment, and those are all managed by the same Service Fabric cluster. It’s a really cool way to support a containerized model. But again, it’s not part of the exam, and it’s not part of this course. So in this section, we’re going to be talking about Kubernetes. Then, later on, we’ll look at Azure Container Instances. So stay tuned.
2. Create an AKS Cluster
So let’s start off by looking at Azure Kubernetes Service. I’m going to go into the “Create a resource” section, and I’m going to go under Containers. Now, we can see Kubernetes Service right here in the popular list, but I go under Containers because we’re going to talk about container instances in a little bit, before getting to the Kubernetes service.
Now, this is the standard tab format for creating resources. Remember how I mentioned in a previous video that Kubernetes runs on virtual machines? It’s basically an abstraction, an orchestration layer, on top of virtual machines. So give it whatever resource group name you want, and then give the cluster a name; I’m calling mine “newkubes.” So now we’re creating a cluster of machines, and it’s going to be running in a resource group. Now, virtual machine pricing is sensitive to the region you choose.
I’m going to choose a Canadian region here. You can choose whatever region is good for you, but for testing purposes, you might want to choose one of the more inexpensive regions. Now, Kubernetes is a standard that Google created and open-sourced, and new versions get released pretty regularly. We can see the different Kubernetes versions here. So if you’re deep into Kubernetes and you use it already, this version choice might be important to you.
Obviously there’s the default, which I would guess is the production-stable version, and there are various alpha and beta preview versions to experiment with. I would always choose the default unless you have a reason not to. Now again, very similar to virtual machines, we’re going to choose the node size that the cluster is going to run on. It has selected the DS2 v2 node size for me by default. Now, you’ll see that each node is going to cost me about $130 a month. So this is the kind of thing that can get expensive for just playing around. I’m not aware of any restriction stopping you from choosing one of the cheaper B-series sizes, either.
However, because we’re only going to run this for a few hours, that $130 a month works out to about a dollar every six hours. So if I shut this down within six hours, it’s only going to cost me a dollar per node. I’m going to leave the default there. Now, we do get to choose how many nodes we want to start off with. We can start with as little as one node and go up to 100. Now, my account, like many others, has limits on the number of cores I can create. I did this on purpose because I don’t want to create a 100-node cluster and just forget about it. And so I’ve asked Microsoft to set limits on my account in this particular region, Canada Central; quotas are set by region. Since this is a two-core server size, the maximum number of nodes I can have is five.
That’s assuming no other servers are running against that quota. So I’m going to leave this at three. Now, the nodes are basically the underlying hardware, and the concept of node pools lets you run several containers across a pool of nodes. You create a minimum of one node pool and assign resources to it; you can set up multiple node pools and run your containers in each one. As we can see, we’ve got a single pool already, and it’s taking up all three of the nodes that we’ve assigned. If we were to add another pool, we would have to divide the nodes between them. Then there’s the concept of virtual nodes. Now, we’re going to talk about Azure Container Instances in a moment, but when scaling virtual machines, we’ve seen in a previous video that it takes several minutes for a brand new virtual machine to be created, and then Kubernetes has to deploy images to that virtual machine before it can join the cluster. ACI is a much faster deployment model. So, if you really need speed, and you can’t afford to wait even five minutes, virtual nodes give you what they’re calling “burstable scaling.”
This is the concept of burstable scaling using ACI. We’re going to leave that disabled for now. As we can see, virtual machine scale sets are enabled by default; we could disable them and run independent VMs, but then we wouldn’t be able to use some of the auto-scaling features, so we’ll leave that enabled. We also have role-based access control enabled, which we can use to assign users various roles to manage this cluster. So basically, leave that enabled. Go into networking next. Now, since this runs on virtual machines and scale sets, as we saw, we’re going to create a brand new virtual network here. A network security group will be formed to handle security. And we can set up some basic DNS settings so that we can connect to this cluster.
The DNS name prefix is effectively the host name. Load balancing is set to “Standard” by default. We are using public IP addresses with this, and we’re going to leave all the defaults in terms of network policies and application routing. We do want to turn on monitoring; we’ll talk about monitoring in a bit. We want to get data out of the containers and into Azure so that we can use tools like Azure Monitor to manage and monitor performance; otherwise, we’d be limited to the Kubernetes-native monitoring tools. We’ll leave tags alone, and finally we can click Review. Clicking “Review” runs some validation. Scroll down here: we’re creating a three-node cluster of the DS2 v2 type, so that’s going to be a six-CPU configuration. And when we click “Create,” it’s going to go off and create our first Kubernetes cluster. So I’m going to do that. It’s going to take a little while, and when we come back, we’re going to have a running Kubernetes cluster.
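As a point of reference, the same cluster can also be created from the Azure CLI instead of the portal. Here’s a hedged sketch; the resource group name, cluster name, and region are my own placeholders, not the exact names used in the video:

```shell
# Sketch: create a resource group and a 3-node AKS cluster from the CLI.
# Names, region, and VM size below are illustrative placeholders.
az group create --name newkubes-rg --location canadacentral

az aks create \
  --resource-group newkubes-rg \
  --name newkubes \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2 \
  --enable-addons monitoring \
  --generate-ssh-keys
```

The `--enable-addons monitoring` flag corresponds to the monitoring option we just turned on in the portal.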
Alright, so we can go into the deployment details for the Kubernetes resource that I created, and I can see that it took about four minutes for the cluster to be deployed. That’s those three nodes with two CPUs each, et cetera. I’m going to click on “Go to resource” to go directly to the resource.
Now, there’s not much to see from the Azure side of the portal. We can see the Kubernetes version number and the API server address for connecting to the Kubernetes control plane; it’s in a succeeded state, running in Canada Central. Let’s go under “Node pools.” Remember, we only created the one default node pool; that’s what’s got the resources assigned to it: the Linux operating system and the three nodes created for this pool. If I wanted to add another pool, it could have a different Kubernetes version, a different operating system, and a different node size. You know, we can mix DS2s and DS4s by having multiple node pools. And even though this is set up for scaling, because we are operating under the node-pool design, we can’t scale everything from the top level.
We have to go under node pools, under the three dots here, and then we can add another node, scale up or scale down, et cetera. We can even change the Kubernetes version that it’s running. So I think at this point, like I said, there’s not much to see here. Let’s go into Cloud Shell to learn more about this Kubernetes service. Now remember, Cloud Shell is a command line running inside the Azure portal, and it supports both PowerShell and Bash, although I’m going to be working in Bash for this example. You could also run this in a Bash shell on your own machine with the Azure CLI installed locally. Not only do you have to sign in to Azure so that it recognizes you, you also have to download the appropriate software that will allow your local CLI to connect to AKS.
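If you do want to follow along from a local terminal rather than Cloud Shell, the setup amounts to roughly this (a sketch, assuming the Azure CLI is already installed):

```shell
# Sign in so the CLI recognizes you against your subscription.
az login

# Install kubectl locally; Cloud Shell already has it preinstalled.
az aks install-cli
```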
That software is kubectl: K-U-B-E-C-T-L. You’d have to install that on your own machine to follow along locally, but it’s already installed in Cloud Shell. Now, by default, we haven’t connected to our AKS cluster yet. We need to run a command called az aks get-credentials. I can hit the Tab key here, and it will autocomplete the commands; that’s useful to know if you can’t remember how something is spelled or aren’t confident that you’ve spelled it correctly. Now, in order to connect to AKS, we’re going to need the names of the things that we’ve created, starting with the resource group. If you look up here, we can see the resource group name, and we also have the name of the cluster; it’s right up here in the top left. So this command is going to get the credentials to connect to the Kubernetes cluster.
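The credentials command from this step looks roughly like this (the resource group and cluster names are placeholders for whatever names you chose earlier):

```shell
# Merge the cluster's credentials into ~/.kube/config so kubectl
# can talk to this AKS cluster. Names are illustrative placeholders.
az aks get-credentials \
  --resource-group newkubes-rg \
  --name newkubes
```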
Now that I have the proper context, I can begin using it. As previously stated, the main command is kubectl. We can start by looking at what we already have running with “kubectl get nodes.” And now we’re not talking to the portal; we’re talking to the orchestrator, the traffic cop, if you will, that’s running inside of Kubernetes. So now we can see the three DS2 servers that we have. They’re in a Ready state, they’ve been running for 27 minutes, and they’re all running this version of Kubernetes. So at this point, we’ve established that they’re running and ready to go. Now what we’re going to want to do is deploy an app. Right now, there’s no web server running; there are servers running, but no Node.js or Tomcat listening for requests. We have to deploy that. So in the next video, I’m going to push a sample application to this cluster, and we’re going to see it running.
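As a quick recap before the next lesson, verifying the cluster from the shell looks like this:

```shell
# List the worker nodes registered with the Kubernetes control plane.
# Expect one line per node with STATUS "Ready" once the cluster is up.
kubectl get nodes
```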
4. Deploy a Container to AKS
Alright, so I went online and found what’s called a YAML file; it’s called azure-vote.yaml. Now, this is the set of instructions that is going to tell Kubernetes where to get the images to deploy an application for us. So I basically copied and pasted this YAML file into the Azure Cloud Shell. So if I close this up, I go into the Azure Cloud Shell. Now, I’m using the vi editor.
Cloud Shell also supports nano. We can go into this YAML file and see that there is a set of specifications: it tells Kubernetes what containers it needs and how much memory to allocate to them. And if we go down a little bit, we can see that there’s the back-end service that’s running on port 6379, and the front-end application that’s running on port 80, which is the web server.
And it’s going to be using Redis as the back-end environment, and then there’s a load balancer that’s going to be deployed as well. OK, so there are basically three services that are going to be deployed as part of this YAML file. So I’ll exit vi with colon-x to save and quit. Now I can basically tell Kubernetes to deploy this; the command is “kubectl apply.” So “kubectl apply -f azure-vote.yaml” will take the set of instructions in this YAML file and push them out to the running nodes.
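To give a sense of the file’s shape, here is an abbreviated sketch in the style of Microsoft’s public azure-vote sample. The image name is Microsoft’s published sample image, but the resource values here are illustrative, not an exact copy of the file in the video:

```yaml
# Abbreviated sketch of an azure-vote-style manifest (illustrative values).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
        resources:
          requests:
            cpu: 250m
            memory: 128Mi      # minimum the container asks for
          limits:
            cpu: 500m
            memory: 256Mi      # maximum it may consume
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer           # gives the front end a public IP
  ports:
  - port: 80
  selector:
    app: azure-vote-front
```

The Redis back end is defined the same way in its own Deployment and Service, listening on port 6379.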
Now, unfortunately, I don’t have a course that can teach you how to write YAML; that’s a whole separate thing, and you’re not going to be required to write YAML as part of the exam. Simply know that you have a set of instructions that you can then use the apply command to have the cluster execute. So I just hit apply, and you can see that the deployments and services for the front end and back end have been created. That was pretty quick. Now, if we go back to “kubectl get nodes,” we can see that the nodes are still there; they’re still in a Ready state. The command that we’re going to want to use now is “kubectl get service.” So now we can see what is running on top of those nodes. We have the two services, the front-end service and the back-end service. Remember, one is on port 6379 and one is on port 80, and they’ve been up for 30 seconds.
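The deploy-and-verify commands from this step, as a sketch:

```shell
# Push the manifest's instructions out to the cluster.
kubectl apply -f azure-vote.yaml

# Confirm the nodes are still healthy, then list the services,
# including the front end's external IP once it's assigned.
kubectl get nodes
kubectl get service
```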
To access the front end, an external IP address is required. None of this is in a pending state anymore; it was assigned about as quickly as we deployed it, and the external IP address is listed here. And so if I go into a web browser window and put in that external IP address, I have a working front-end application. Now, it’s basically using Redis as the back-end service. So I’m able to push a button, get a counter to increment, and if I refresh, the counters are remembered because it’s pulling from Redis. So it’s a very simple application, and we saw how easy it was to deploy prepackaged code, because these were prebuilt images, into a Kubernetes cluster.
Now, Kubernetes is not the simplest container solution within Azure, right? We are dealing with multiple nodes, we’re dealing with containers, and we’re dealing with kubectl in order to access it all. And we can sort of see the complexity right here. So if this is more complexity than you want to take on, then you may want a simpler container solution, such as a web app or ACI. Now, in a previous version of this lesson, I talked about a dashboard. So if I minimize this and go back, Kubernetes used to have a dashboard, and there was even a button right on the Kubernetes page here that gave you instructions on how to get to the dashboard. What you used to do is run the “az aks browse” command, and that would bring up the dashboard. Let me show you what happens when you do that.
Now, this does not work. A new browser tab has been opened, and as we can see, we have the Kubernetes dashboard with a lot of great stuff in here, but we get error messages right away, and none of the screens contain any data. I’m going to close that down and go back. Okay, so basically the AKS Kubernetes dashboard is not working inside the Cloud Shell. If you’re running this in your own CLI on your own system, there are ways you can get around it: changing the port number that it works on, setting it up as a proxy, and then having it run on your own local machine. But the dashboard does not work from the Cloud Shell. Anyway, in the next lesson, we’re going to do a couple more commands here in the CLI with our little Kubernetes cluster before we shut it down.
5. Scaling Kubernetes
Alright, so let’s talk about scaling. We’re going to start by running “kubectl get nodes.” Remember, we have three servers that are ready to do work for our application. If we look at what our deployments are like, with “kubectl get pods,” we can see that currently we have one front-end pod and one back-end pod.
If we add the -o wide parameter, we get a little bit more information, and we can see where each pod is running. So the back-end container is running on the first VM, node 0, and the front-end container is running on node 0 as well. So right now, the second and third virtual machines are essentially unused, right? We have a three-node AKS cluster, and only one server is being used. Now, scaling on AKS is interesting because there are two ways to do it.
One is that we can simply increase the number of pods. I can say “kubectl scale” with replicas equal to three, and then name the deployment I want to scale; in this case, it’s the azure-vote-front deployment, which is this one. And when I run it, it says “scaled.” If I go back to “get pods” in the wide view, we see that there are now four entries: one for the back end and three for the front end. And what’s interesting is that we now get one pod deployed onto the second VM and one onto the third VM. So we’re actually seeing the work spread across more of our resources.
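The scaling commands from this step, sketched out (the deployment name assumes the azure-vote sample):

```shell
# Scale the front-end deployment from 1 pod out to 3.
kubectl scale --replicas=3 deployment/azure-vote-front

# -o wide adds a NODE column showing which VM each pod landed on.
kubectl get pods -o wide
```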
Now, I should mention that the way containers work here is that you specify the amount of resources each one requires in the YAML file. So if I go back into azure-vote.yaml, skip over the first section, and go to the azure-vote-front deployment, we can see that it requests a certain amount of CPU and a certain amount of memory. In this case, it’s 128 megabytes, and it has a limit of 256 megabytes as the maximum. So that’s the minimum it needs and the maximum it can use. Now remember, these are DS2 servers; we can quickly look it up. The DS2 v2 server has 7 GB of memory. And so, if a container requests a maximum of only 256 MB, then a single server can hold around 28 of these containers.
If I exit out of this, I should be able to scale out to 28 pods and not even run out of room, and that’s just a single server. Prepare for the craziness. And we can see the pods spread across the nodes; Kubernetes knows what resources the containers require and how they can be distributed across all of these servers. Now, it’s a bit chaotic, obviously, but scaling this down is fairly straightforward as well. So if we said, “Okay, well, 28 is obviously too much,” we can scale down to one; it’s going to terminate all of those pods, and we’re back to just the front end and back end running. Now, that’s only one way to scale. Because we’ve only scaled the pods, we are not increasing our costs: we’re using the existing servers that we’re paying for, and we’re just getting more out of them, because each container has a limit in terms of how much resource it can take.
If we needed to scale the servers themselves, remember, we can go back up to the portal, under node pools, and of course we can manually scale the number of nodes if we ever need to. But there’s also what’s known as the cluster autoscaler. Let me just check that we’re back to just the two pods running. We use the autoscaler to increase the number of servers as more pods are added. So let’s say we were to go beyond 28 pods per node times three nodes, somewhere over 80 containers; then we’re going to need more servers. Well, if we enable the autoscaler, more servers get created when more servers are required. You can set the minimum, maximum, et cetera, number of nodes, and it can deploy new servers or scale back on them as needed.
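Turning on the cluster autoscaler can also be done from the CLI; a hedged sketch, with placeholder names and bounds:

```shell
# Sketch: enable the cluster autoscaler on an existing AKS cluster,
# letting it grow between 1 and 5 nodes. Names/bounds are placeholders.
az aks update \
  --resource-group newkubes-rg \
  --name newkubes \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```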
6. Installing Docker Desktop or Docker Toolbox
So we’ve been talking about containers, and until now we’ve been using the Microsoft demo of the Vote app, and you might be asking yourself, “Well, how do I turn my app into a container that I can deploy into Azure?” This is entirely optional, but I believe you should install Docker on your computer. You might be a Linux user, you might be a Windows user; it’s going to be an individual thing. I happen to be using the Windows 10 Home edition, so these are the instructions for downloading Docker for Windows.
You’d want this if you want to go down the path of creating your own applications, uploading Docker images of them to the Azure Container Registry, for instance, or Docker Hub, and getting them into a Docker container in Azure. The system requirements for what’s called Docker Desktop for Windows are 64-bit Windows 10 Pro, Enterprise, or Education; and, of course, there are Mac and Linux instructions as well. I don’t happen to have one of those editions, so I’m going to have to use what’s called Docker Toolbox. Now, Docker Toolbox is outdated, and a lot of people don’t like it as much as the Docker Desktop solution for Windows. So you end up having two options, OK? One is that you can download Docker Toolbox; there’s a link to it, and you can download the app and install it on your operating system. Or you can create a virtual machine. Oracle acquired VirtualBox some years ago, and it’s basically free virtualization software that you can run on your own computer, and then you can install whatever operating system you want into that.
And so what some people do is get VirtualBox installed in Windows, install Linux in the virtual machine, and then use the full version of Docker Desktop on Linux inside the virtual machine. Hopefully, that makes sense. You need to decide whether you want the older legacy solution, Docker Toolbox, on an older version of Windows or Windows 10 Home, or whether you want to install VirtualBox and be able to run the latest edition of Docker Desktop. The VirtualBox route is not as convenient, because you have to go into VirtualBox and start up your Linux VM to get into Docker. So, hopefully, that makes sense. I’m going to include a link with this video, and you can make a decision: use Docker Desktop if you have a compatible operating system, use Docker Toolbox, or install a virtual machine and get the latest version that way.
7. ACI Azure Container Instances
So now we’re going to talk about container instances in Azure. Container instances, according to Microsoft, are the quickest way to deploy a Docker container into Azure. This contrasts with Azure Kubernetes Service, AKS, which, as we saw, takes you through creating servers and waiting for those to spin up. There’s orchestration, and it takes a lot of effort to get AKS up and running, and then you have to manage it. And we talked about the cluster autoscaler, pod scaling, and all the types of scaling that go into it.
With container instances, you simply provide the image to Azure, and Azure will run it. So it’s quick and easy: no virtual machine management, nothing fancy. Now, a third way to run containers is the web app container, which gives you all the features of an App Service in terms of deployment and things like that, but then you’re managing an App Service plan. Azure Container Instances are just “here’s an image; please run it.” So we’re going to demonstrate this, and we’ll see how quick it is. We’ll go into the Azure portal, go under “Create a resource,” and I’m going to type “container”; I might have to go into “container instances” to find it. And we see the very familiar form, starting with choosing our subscription.
We’re going to create a brand new resource group; I’m going to call this group “ACI group.” The container needs to have a name, so we’ll call this “newcontainer,” and we’ll deploy it into a specific region; let’s go back into Canada and choose Canada Central for this. Now we have options for where to pull the image from. You can pull it out of Docker Hub, which is a public registry that you can deploy your Docker images to. Azure also has its own registry, ACR (Azure Container Registry), into which you can push images that are private to you; you can then connect to your ACR registry and choose images from it. And finally, there are the quick-start images.
And the quick-start images are literally things that Microsoft is providing to you: in this case, a Hello World app, an NGINX server (which is a very fast web server), and a Nano Server (which runs a tiny IIS web server). With the Hello World option, we do not get to choose some elements of the size. As you can see, by default we’re getting a small server: one virtual CPU and 1.5 GB of memory. And the choices that we have only go from one to four in terms of the number of CPUs, so it’s a very constrained server type. We’ll just leave that as the default. On the networking tab, we can actually give this a fully qualified domain name. So this is where I can set a DNS name label for the new container, and as long as it’s available in the region, I can have a publicly accessible URL for this. And we can see the ports that are going to be open.
So I’m going to click “Review + create,” see the summary of what’s going on, and click on the “Create” button. I’m just going to watch the clock and see how long this takes to actually deploy. Alright, so let’s look at the deployment operation details, and we can see that we have a container provisioned and running in 37 seconds, which is a lot faster than an AKS cluster. Now, you’ll recall that when we were looking at AKS, container instances came up as a burstable scaling option. So if you have an AKS cluster and you need a container to expand very quickly, you can choose the ACI option for bursting. Let’s say you’ve got four nodes and you need to get to five; you can use an ACI container to get there quickly, rather than waiting for a new server to spin up and potentially having performance issues.
This is known as burstable scaling, and ACI can be burst into from AKS. So let’s go into the container pretty quickly here. Remember, we created a fully qualified domain name; I’ll just copy it to the clipboard and paste it into a browser window, and it says “Welcome to Azure Container Instances.” That’s the Hello World image that we chose. So ACI is great for development, demonstrations, and small applications that don’t really need scaling, monitoring, or metrics: it’s basically containers that you can run without having to spin up a whole solution for your small application. And again, this gives me an IP address with a fully qualified name, so I could then use a CNAME record to put a proper domain on it. So Azure supports not only simple running-container options like this, but also a multi-server, multi-node cluster orchestration model like AKS.
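For completeness, the portal deployment we just walked through can be sketched from the CLI too. The resource names and DNS label below are placeholders; the image is Microsoft’s public ACI hello-world sample:

```shell
# Sketch: run the hello-world quick-start image in ACI from the CLI.
# Resource group, container name, and DNS label are placeholders.
az container create \
  --resource-group aci-group \
  --name newcontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 \
  --memory 1.5 \
  --dns-name-label mynewcontainer-demo \
  --ports 80
```

The `--dns-name-label` value has to be unique within the region, which is what gives you the publicly accessible URL.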