Amazon AWS Certified Solutions Architect Professional SAP-C02 Topic: Design for New Solutions Part 7
December 16, 2022

71. Placement Groups

Hey everyone, and welcome back to the Knowledge Portal video series. Today we'll be speaking about one very important topic called the AWS placement group. Now, the reason I said it is important is that this is one of the frequently asked topics in the exam. And if you go into interviews, specifically for roles related to solutions architecture or DevOps, you may be asked this or other questions closely related to placement groups. So this is very important, and I hope you stay focused. So let's begin. Now, a placement group is a logical group of instances within a single availability zone. Now, one of the questions that you might ask is, "Why do you need to have a logical group of instances?" And the answer is that certain applications require very low latency and high network throughput.
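
As a side note, a placement group can also be created from the AWS CLI before launching instances into it. A minimal sketch, assuming an example group name of your own:

$ aws ec2 create-placement-group --group-name kplabs-demo --strategy cluster   # the "cluster" strategy packs instances close together in one AZ
$ aws ec2 describe-placement-groups --group-names kplabs-demo                  # verify that the group exists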

Now, remember, in one of the organisations that I used to work with earlier, there were two servers that used to exchange data in the range of 400 to 500 GB per day. So it was like a cluster. And during that time, having very high network throughput was a primary requirement. Because let's assume that you have one server with very fast resources (very high RAM, CPU, et cetera), and you have a second server with a similar instance type. Now, if you're exchanging 400 or 500 GB of data between these two servers and the network is slow, then the overall application processing time will be significantly slower, even though the instance type that you have chosen is quite powerful. This is one of the reasons why network throughput is critical for so many applications. So let's understand this with an example. Now, this is the India map. And let's assume these are the three availability zones we have already discussed in the previous chapters. Now, let's assume you have application one over here and application two in availability zone 3.

Now, these two applications are communicating with each other, and the overall data transfer that takes place is around 300 or 400 GB. Now, the problem is that the application is running quite slowly, and after doing a deep analysis, it was found that both applications were far away from each other, and due to the network latency, the application processing time was quite slow. And this is one of the real scenarios that we have gone through. So let's understand this with a simple example so that you might get familiar. Let me ping kplabs.in. I'll send two echo requests. Now, if you look into the time it took, it was around 88 milliseconds. Now, the reason why this is quite decent, not bad timing, is that the web server is hosted in Singapore, and Singapore is not very far from India. So we are getting decent times over here. Now, let me ping a website that is hosted far away from here. Let's take amazon.com.

Now, if you look into the latency, the times are around 464 to 478 milliseconds. So it is quite high, around five times higher than what we have seen. The location is undoubtedly the reason for the higher latency. The server where kplabs.in is hosted is quite close to me; however, the server related to Amazon is located quite far away from me. And this is the reason why, when you architect a system, it is important to understand where your potential customers are, and then you architect your system accordingly. So, returning to the topic, having applications as close to each other as possible will reduce the network time between them. So this is one of the reasons why Amazon recommends that you create a placement group. So this is a placement group, and the instances should be launched within it. So if two instances require high network throughput between them, then you launch them within the placement group. So I hope you got the basic concept. Now, one important thing to remember is that a placement group is limited to a single availability zone.
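
For reference, the ping commands used in this demo look roughly like the following; the timings shown are illustrative, not the actual on-screen output:

$ ping -c 2 kplabs.in      # nearby server (Singapore): replies in roughly 88 ms
$ ping -c 2 amazon.com     # distant server: replies in roughly 460-500 ms

The -c 2 flag sends exactly two echo requests, matching what was done on screen.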

So you cannot have a placement group spanning availability zone one and availability zone two. This is not allowed because, again, network latency will come into play here. So if you want a placement group, it should reside in a single availability zone. So let's look at some of the important points to remember. The first important point is that we have to first create a placement group and then launch instances inside it. It cannot be that you first create the instances, then create a placement group, and then move those instances inside the placement group. That is not allowed. So, this is a very important point. The second important point over here is that placement groups cannot span multiple availability zones, and we discussed why they cannot. The third important point is that only specific instance types can be launched inside a placement group.

Now, I hope you know, or if you don't, we will be discussing it now, that not all instances in AWS have very high network throughput. Allow me to demonstrate this, because it is critical that you understand it. Now, if you go into the AWS instance types, the list starts at t2.nano as of now. So you have t2.nano and t2.micro. And you should also consider the network performance column over here. So you see that the network performance of t2.nano is low, and t2.micro is low to moderate. And if you go down, you have instances with high network performance, and then there are certain instances that have very high network performance, like 10 gigabits, 20 gigabits, et cetera. So, if you are launching instances in a placement group, it is assumed that you need very high network performance. And this is the reason why AWS will not allow you to launch instances like t2.nano and t2.micro in a placement group; otherwise, the entire concept of placement groups falls apart. So only certain instances can be launched inside a placement group. Now, you might be asking which instance types can be launched, and this is actually part of the placement group documentation.

So AWS has laid out certain instance types: m4.large, m4.xlarge, and so on. So these are the types of instances that can be launched within a placement group. Now, let me give you one tip that may be useful to you, because it has helped me design hybrid architectures. Generally, if you look into other providers, such as DigitalOcean, let me go a little lower, and you see they have a very fast network; they have around 40 Gbps of network available for the instances. Now, one of the important things to consider is that in AWS, t2.micro or t2.nano have very low network performance. However, if you launch instances over here in DigitalOcean, and even in Linode, even if you launch a $5 instance, which is the bare minimum, you will have a very, very fast network.

And this is very important to understand, because in AWS, if you launch instances like t2.nano or t2.micro, the network performance is not very good. However, over here, even if you launch an instance with 512 MB of RAM, you will get quite decent network throughput that is much faster than AWS, based on my observations. So with a $5 instance, you will have quite a decent amount of network performance compared to AWS. So, coming back to the presentation, we discussed the three important points. Now, the next important point here is that we cannot move existing instances into a placement group. If we want to move an existing instance, then what we need to do is create a snapshot or an AMI of that instance and then relaunch a new instance from that particular AMI.

So that is one possible way. And the last important point for this slide is that the maximum network throughput between two instances in a placement group is limited by the slower of the two instances. So this is quite an important point. Let me actually show you this one. So let's assume that you have launched two instances: one with 10-gigabit network connectivity and one with 20-gigabit network connectivity. If you launch both of these instances in the same placement group, then the maximum performance that you will get will be that of the slower of the two instances, which will be 10 gigabits. So even though you launched both of these instances together, you will get a maximum network performance of 10 gigabits. And this is what this slide's last point meant: that the maximum network throughput between two instances will be limited by the slower of the two instances.
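
Since existing instances cannot be moved into a placement group directly, the relaunch-via-AMI approach mentioned above can be scripted roughly as follows; the instance ID, AMI ID, and group name are placeholders for illustration:

$ aws ec2 create-image --instance-id i-0123456789abcdef0 --name my-server-ami   # snapshot the existing instance as an AMI
$ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type m4.large --placement GroupName=kplabs-demo   # relaunch it inside the placement group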

And this is a very important point. And that is why, basically, AWS recommends that instances of the same type should be launched within the placement group. So you don't have to worry about which instance type has the worse network performance. That is enough theory. So, before we get to the last two points, let's start with the practical session and see how we can create a placement group. So let me do one thing. Let me open a new incognito tab over here, and I'll open up the AWS management console. So they have come up with a new management console over here. I'll just log in. The new management console is a very secure way of doing things, similar to how Gmail has done things. So let me show you. Now, generally, if you remember the earlier console page, you used to have your username and password fields together. If your username and password fields are both present, attackers have an easier chance of launching a brute-force attack. And this is the reason why AWS and Gmail have moved the password to a different screen. So let me just enter the username. I'll log in, and you see that it now asks me for the password on a separate screen. So, this is quite a good way of designing a login screen. Now let's go ahead and open up the EC2 console.

So, if you see over here on the left, there is the tab for placement groups. So we need to create a placement group. Remember, the name that you give to a placement group must be unique within your AWS account. So let me give the name "kplabs" and I'll click on "Create." Now there is one bug that I find: whenever you click on Create, the placement group does not immediately come up. You see, even if I refresh the page, it will not immediately appear.

So it takes some amount of time for this placement group to appear. Either AWS should be showing an expected loading page, or it's a bug, which is what I'm assuming right now. So once the placement group is created, we can launch EC2 instances within it. So let me show you this particular aspect. I'll click on Launch instance and select the instance type. Now let me select one of the instance types that is supported by the placement group; I'll select m4.large. I click on next, and now, if you see, there is an option for a placement group over here, and I can select the placement group that was created. Now, the reason why it is not showing the placement group that we created is that it has not appeared yet. It generally takes time to appear, so meanwhile, let's try to select an instance type that cannot be part of a placement group.

So let me try t2.medium, and if I go to the next screen, you'll see that the placement group option itself disappears. So you do not really have an option to select the placement group here. And this is one important point to remember. So we'll just have to wait until the placement group appears. It has not appeared yet, so until it does, let's complete the PowerPoint presentation so we won't have a very long lecture, as suggested by a lot of students. So at this point, we have discussed that the name that we specify for a placement group must be unique across your AWS account. And the last point over here is that we cannot merge placement groups. Now, if you have instances in one placement group and instances in another placement group, and you want the instances to communicate with each other with very high network performance, you cannot really merge the two placement groups. You have to take a snapshot or AMI of one instance and relaunch that instance in the second placement group. So this is what this particular point means.

Now, I hope you understand the points we discussed, which are critical in terms of exams and even interview questions. So this is the reason why I emphasise this quite a lot. Now let's see if AWS has managed to show us the placement group that we have created. Yes, it is there. So we now have a kplabs placement group. Let's do one thing: select the m4.large instance type again. Okay, let me just start over, and now it is showing kplabs as a placement group. I'll click on review and launch, and I'll launch this particular instance. Now, I don't recommend launching this instance yourself, because it is outside the free tier and you'll have to pay for it; there's only one thing I want to show you, and that is why I launched this particular instance in the placement group. The only thing that I wanted to show you over here is that if you select the instance and go down, you see the placement group field, and it shows kplabs. So it is showing you which placement group this instance belongs to. Now if you open up other instances that are not in a placement group, it will show as "none." So this is one way of identifying instances that are part of a placement group. So I hope this topic of placement groups has been understood by you.

72. Introduction to Docker

Hey everyone, and welcome back. In today's video, we will be discussing the basics of Docker and the advantages that it offers, which really make it one of the great technologies available, and we will also do a small demo before we conclude the video. So let's go ahead and understand more about it. Now, before we go ahead and understand the advantages that Docker offers, let's look into a typical software installation workflow. I'm sure that most of you have already experienced this on your workstation. Assume we have a Windows operating system and you want to install a certain application or a game. Now the first step is to download the installer.

So the installer would be a typical exe file. Now, once you download the installer, you go ahead and double-click on the exe file so that the installer starts to run. Now, many of you may have already encountered an error message in the middle of the installation stating that certain files or dependencies are missing, et cetera. So you start to troubleshoot the issue so that you understand the fix for this specific error message. Now, once you troubleshoot it and install the specific dependency, you rerun the installer, and maybe you get another error that is not the same as the previous one. Again, you go back to the same cycle of troubleshooting and rerunning the installer. Now, you would typically see this type of workflow when you used to install the GTA game. It always used to say that certain DLLs are missing and certain dependencies are missing, and you had to go through this workflow at least once or twice to make sure that you had all the dependencies and all the requirements to install the game. Now, this workflow that we were discussing here is the end-user workflow. There is also a workflow related to development. Now let's say that there is a programmer who works very hard at developing software.

So he has developed specific software. Now, depending on the way in which he has created the software, it might or might not work out of the box in every operating system. So let's say that he has created the software with Visual Studio. It may work directly in Windows, but it will not work in Linux or Mac. And this is a pain for the programmer, because it is extremely difficult to make that software work in Linux or on the Mac. He might use Java to get portability; however, if he does not want to use Java, it becomes extremely difficult for the developer as well.

So what he can do is create the software and put it in a Docker container. Now he can deploy the Docker container in Windows, Linux, and Mac, and he knows for certain that the Docker container he has tested in his local environment is reliable. It will work in Windows, Linux, and Mac, as well as all other operating systems supported by Docker. So this makes it much easier for the developer to push his software to a wide range of operating systems, along with all of the dependencies that this specific software requires. Because, as we discussed in the second slide, software installation typically fails due to dependency issues, et cetera. He can put all of these dependencies within the Docker container itself.

So once he deploys the Docker container to any operating system, it will work out of the box, and the end user does not really have to do anything other than run this specific Docker container. So with this, let's understand what Docker is in definitive terms. So Docker is basically an open platform. Once we've created a Docker container, we can run it anywhere: on Windows, Linux, or Mac, whether on a laptop, in a data center, or in the cloud. Now, it follows the "build once, run anywhere" approach. That basically means that once the software developer has built his own Docker container, he can run it anywhere. Depending on the tests that he performed while creating the Docker container, he can run it in Windows, Linux, and Mac in perfectly working condition. So this is the theoretical perspective.
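
A minimal sketch of the "build once, run anywhere" workflow, assuming a hypothetical image name:

$ docker build -t myapp:1.0 .     # build the image once, on any machine with the Docker daemon
$ docker run --rm myapp:1.0       # run the exact same image on Windows, Linux, or Mac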

Let me quickly give you a demo so that we have a good understanding of it. So for our demo purposes, we will take the example of NGINX. Now, NGINX is a very popular web server on the market. We'll consider NGINX to be the software that has been written by an organisation. Now, if you look into the technical specifications of NGINX under the supported distributions, it does support Amazon Linux. Then you have CentOS, Debian, and FreeBSD. Then you have Red Hat Enterprise Linux, SUSE, Ubuntu, et cetera.

But it does not really support Windows. And this is the reason why I intentionally chose this specific web server for our demo purposes. Now, if we try to install this natively on Windows, it will not work, because there is no exe file out of the box. So what you can do is put this entire NGINX within a Docker container, and once you've done that, you can deploy it as a Docker container on Windows, Linux, and Mac.

Now, again, Windows is not supported here, but if NGINX is inside the Docker container, it will work regardless of whether it is supported out of the box. Now, the first thing that we'll typically need to do is verify whether the NGINX Docker container is available or not. So this is the Docker Hub website. And here you see the NGINX container, which is available; this is the official image. So let me click over here, and if we go a bit down, you see this is the official NGINX image, and basically, here is the command to pull the NGINX image. And if you go a bit down, you will see various commands with which you can run this specific image. So generally, there are two steps involved. When you use Docker, the first step is that this specific Docker container image would be hosted on a specific website.

Now, Docker Hub is one such website. So you must pull this image, or rather, download it to the operating system where you want to start the container. For that, you have the docker pull nginx command. Now, once the image is downloaded, you can go ahead and start the container from it. So let's look into how exactly this would work. Now, let me copy this, and in my Windows CLI, I'll paste the docker pull nginx command. So this command will go ahead and download the NGINX image from the Docker Hub repository. Now, do remember that before this command works, you need to make sure that you have the Docker daemon installed within your operating system. Docker supports Windows, Linux, and Mac. And basically, if you see over here, I have the Docker daemon running, and this is the reason why my docker command works perfectly. So let's quickly wait for a moment for our image to be downloaded. All right, so our NGINX image has been successfully downloaded.

The next step is to launch the container from that image. So for that, I'll write a simple command, docker run. I'll specify the port mapping; let's say port 8080 of the host to port 80 of the container. And let me specify the container image, which is nginx. So it has given us this long output over here. But if we quickly do a docker ps, you should see that there is an NGINX container that has started up. Now, in order to verify, in incognito mode I'll put 127.0.0.1:8080 in the browser. And now you see, it's basically showing me the index.html page of NGINX. So that means my NGINX is working perfectly in the Docker container, and I am able to access it from my Windows browser. So this is the great advantage of Docker containers. You don't need to be concerned about whether the application is supported in Windows or whether the application is supported in Linux. You can run it in Windows, Linux, Mac, or any other operating system that supports the official Docker daemon, as long as you put it inside a Docker container.
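
To summarise the demo, the full command sequence was roughly the following; the 8080:80 mapping matches what was typed above:

$ docker pull nginx                 # download the official image from Docker Hub
$ docker run -d -p 8080:80 nginx    # start a container, mapping host port 8080 to container port 80
$ docker ps                         # confirm the container is running
# then browse to http://127.0.0.1:8080 to see the NGINX index page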

So with this, I hope you have a high-level overview of what Docker is all about. Now, one great thing, in fact two advantages, we just saw: first, NGINX, which is not supported out of the box on Windows, worked perfectly inside the Docker container; and second, we did not have to worry about the installation of NGINX and taking care of all of the dependencies, et cetera. All we did was pull this specific Docker container image of NGINX, and that image itself had all the required dependencies to run the NGINX web server. Once we pulled it, we went ahead and started the Docker container, and that's about it. And this is the reason why Docker as a technology has become extremely popular. Typically, if you look into the roles of DevOps engineers and SREs, having a good understanding of Docker is a mandatory thing that a lot of organisations are looking for. And primarily because of the advantages that it offers, a lot of organisations are also mandating that software developers have at least basic knowledge about Docker.

73. Elastic Container Registry (ECR)

Hey everyone, and welcome back. In today's video, we will be discussing the Elastic Container Registry. Now, in definitive terms, Amazon's Elastic Container Registry, which is also referred to as ECR, is a fully managed Docker container registry that allows us to store, manage, and deploy Docker container images. The simplest example with which I can explain ECR is the development of code. So generally, if you're working in an organisation and you're writing code, what do you do? You commit that code to a central location. That central location could be a Git repository or an SVN repository. Now, what happens after you commit the code?

So the DevOps team would pull the code, build it, and then push it to the EC2 instances. Similarly, instead of putting the code in the Git repository, let's say you are building a Docker container. Now, you cannot really push Docker container images to Git. You need a Docker registry. That Docker registry can be anything: it can be Docker Hub, it can be your own private registry, or you can even make use of the Elastic Container Registry. So you create your Docker image, push it to the ECR, and then the DevOps team or another team can pull your container image from the ECR and push it to the relevant EC2 instances. Now, what used to happen earlier was that organisations used to manage their own private container registries. The problem with that is that you need to take care of high availability, scalability, security, et cetera. ECR takes that burden away and provides you with high availability, scalability, and security via IAM, as well as the ability to host your Docker container images in a centralised location.
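
As a quick sketch, the same repository setup can also be driven from the AWS CLI; the repository name here is just the one used in this demo:

$ aws ecr create-repository --repository-name my-example   # create the central registry repository
$ aws ecr describe-repositories                            # list repositories and their URIs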

So let's go ahead and understand ECR and how we can push Docker images to a central ECR repository. So this is the ECR console. If you want to access the ECR console, go to Services and simply type ECR, which will take you to the ECR console. Now, once you are over here, you can go ahead and click on Get Started. And here it will give you the option to configure your first ECR repository. So let me give it a name; I'll call it my-example. Keep that repository name in mind, because the repository URI is based on it and you will need it later. So I'll give the repository the name, and I'll click on Create repository. So, once the repository has been successfully created, you will notice that there are no images present. Now, in order to understand how we can push images to the ECR, what we'll do for our demo is make use of a simple EC2 instance that I have running.

And this EC2 instance has an IAM role associated with it. The IAM role contains a managed policy called AmazonEC2ContainerRegistryFullAccess. So this is the policy that I have attached to the IAM role, and we'll make use of this EC2 instance to push the Docker image to the ECR. So what I have done is logged into the EC2 instance, and here I have a directory called my-apache. So let's quickly go into the directory, and within it is a file called Dockerfile. Now, let's quickly do a nano on the Dockerfile. This Dockerfile has a series of instructions, which are present over here. First, it will get the Ubuntu 16.04 image, then it will run apt-get update, and it will install Apache. It will echo "Hello World" into the index.html, and it will run a certain set of commands.

Then it will expose port 80, and then it will run the run_apache script. So this is the basic idea behind it. So now what we'll do is quickly do a docker build; I'll tag it my-apache; and we'll point it at this directory. So now Docker will basically take the instructions that are present within the Dockerfile and build the image from them. We won't go ahead and understand them in detail here, but we do have a dedicated Docker course coming up where we will. I hope, at a high-level overview, you understand what the docker build command is doing. Great. So the build has been completed successfully, and you will see that it has been successfully tagged as my-apache:latest.
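
Based on the steps described above, the Dockerfile would look roughly like the following. This is a sketch: instead of the run_apache script mentioned in the video, it starts Apache in the foreground directly, which achieves the same effect:

# build a minimal Apache image that serves a "Hello World" page
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2
RUN echo "Hello World" > /var/www/html/index.html
EXPOSE 80
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]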

Now, if you quickly run the docker images command, you will see that you have an image of Ubuntu 16.04. The docker build has now created a new image called my-apache with the tag latest. So now what we'll do is run a container from this specific image to see if it actually works or not. So let's do a docker run; I'll specify -d and the port mapping, followed by my-apache. Great. So now if you do a quick docker ps, you should see your container up and running. The container is listening on port 80 of the host, and it is forwarding to port 80 of the container. So, to confirm, let's copy the IP address of the EC2 instance and paste it into the browser; you should see Hello World over here. Great. So now we can confirm that our application is working perfectly. The next step is to upload this image to a centralised repository, because if your Jenkins wants to pull it and push it to the EC2 instances, or if you want to run that application in a CI/CD style, you will have to push it to a central repository.

Now, in our case, that central repository would be ECR. Now, in order for you to push to ECR, there are certain commands that you need to understand. So basically, what I have done is create a text document that contains all the commands that we'll be typing. We've already looked at docker build and docker run. If you want to push to the ECR, the first thing you'll need to do is log in. So I'll just copy and paste this command here. So this would basically give you your docker login command, and it would log you in. If you see this, this is the account number. So I'll just copy it, paste it, and hit Enter. And it says here that the login was successful. Great. So this is the first thing. Now, the next important thing is that you will have to tag your image. Because what really happens is that currently, if you try to push the images as they are, they will try to go to Docker Hub. So you have to tag the image, because you want to push it to your own private ECR repository.

So in order to do that, you have to run the docker tag command. You enter my-apache over here, followed by the tag name you wish to associate with it. For that, I'll go to my ECR console. And currently, if you see this, this is the URI. So we can copy this URI, and I'll paste it over here and press Enter. Now, if you run docker images, you should see that there is one more listing available. And if you look, both of them have the same image ID. So this new tag serves as an alias, or pointer, to the my-apache image. All right, now the next thing that you need to do is push this specific image to the ECR container repository. In order to do that, you have to do a docker push. Now, let's copy this and press Enter. So it will now proceed to push the image to the ECR repository. Great, so it has pushed it. So let's quickly verify. If I go inside the repository, you should see that there is one image with the tag latest; this is the image URI, and it was pushed at this specific time. The size is 95 MB. So this is how you can go ahead and configure the ECR and push your images to the ECR repository. So I hope this video has been informative for you, and I look forward to seeing you in the next video.
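
Putting the push workflow together, the commands were roughly the following; the account ID and region are placeholders, and the get-login form shown is the older CLI v1 style used in the video (newer CLI versions use aws ecr get-login-password piped into docker login):

$ aws ecr get-login --no-include-email --region us-east-1    # prints a docker login command; run its output to authenticate
$ docker tag my-apache:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-example:latest
$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-example:latest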

74. Overview of ECS

Hey everyone, and welcome back. In today's video, we will be discussing the Elastic Container Service, which is also referred to as ECS. Now, ECS is a container orchestration service that allows us to run Docker containers on AWS. Now, container orchestration does not mean that it just allows us to run the Docker containers on AWS; it includes numerous features such as Docker container health checks, Docker container networking, and many more. One of the great features of ECS is the ability to run containers without provisioning any servers in your infrastructure, using the AWS Fargate feature.

Now, Fargate is an amazing feature, and a lot of organisations are actually moving to Fargate, because if you do not have to maintain the servers, you do not really have to care about operating system hardening, vulnerability management, patch management, and various other things. AWS takes care of those aspects. So that is another great feature of ECS. Now, a high-level overview of the workings of ECS can be understood with this diagram; all credit goes to the AWS documentation for it. So, on the left-hand side, you have the Elastic Container Registry. I hope you remember from the earlier videos where we discussed how we can push an image to the ECR. So you have that container image, which is lying over here. Now ECS will pull the image from the ECR, and depending upon your task definition and the container definition, basically depending upon the way you define your application, ECS will take the container image and launch it accordingly. Now, there are two major options for launching.

EC2 is one, and Fargate is another. And depending upon that, you will be able to manage the containers at the end. So this is the high-level overview of ECS. Again, this can be better explained directly in practical terms. So let's jump into the practical and look at how we can configure our first ECS cluster. So I'm in my AWS management console. The first thing we'll do is go to the Services menu and search for ECS. So this will take you to the Amazon Container Services console, where three services will be listed. The first is ECS, the second is EKS, which is Kubernetes, and the third is ECR, which is the container registry. Now, ECR is something that we had already configured earlier. Since we are speaking about ECS, we'll go to the ECS service. Let's go to the clusters here. Currently there are no clusters available over here. The first thing we need to do is create a task definition. So let's click on "create a new task definition." There are now two launch type options available: one is Fargate, and the second is EC2.

If you look into the description, within Fargate it says that with AWS-managed infrastructure, there are no Amazon EC2 instances to manage, and for EC2, it says that self-managed infrastructure uses Amazon EC2 instances. So if you look at that slide again, whenever we define our application, it can be based on EC2 or it can be based on Fargate. If you select Fargate, then it will not launch an EC2 instance within your EC2 console, so you don't have to manage the security, hardening, vulnerability assessment, patching, and so on. However, we'll start simple. We'll select EC2 here, and we'll click on Next. So here you'll have to give the task definition a name; let me call it my-definition. Let's go a bit down. You can specify the task size in terms of memory and CPU; we will ignore this. We are more interested in the container definition. Let's click on "add container." And here you will have to specify the image of your container, because when ECS creates the infrastructure, it will definitely need your container image to run your application. So here you need to define the image URI. So let's click on Cancel and then on Services.

Let's type ECR, because we already had a simple "Hello World" Apache image within our ECR. Let's click on the repository name here, and I'll copy the URI. Let's go back to add a container. I'll paste the URI here, and for the container name I'll say "my-apache-container." Here you can set the memory limit as well; let me set the memory limit to 512. For port mapping, I'll put 80 in. If you recall from the ECR video, we used the docker run command with a port mapping of 80. The same thing is applied here. Now, you can also configure a health check and various other things, but we'll keep it simple. There are hundreds of different ways in which you can configure this, but since this is the initial demo, we'll keep it simple. I'll click on "Add." Now that I've completed that, we can proceed to Create. Great. So our task definition has been created successfully. Now, the next thing that we need to do is create a cluster. We'll go to the clusters, and we'll click on Create Cluster.
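
For reference, the same task definition can be registered from the CLI as JSON; a rough sketch using the names from this demo, with a placeholder account ID in the image URI:

$ aws ecs register-task-definition --family my-definition \
    --container-definitions '[{"name": "my-apache-container",
                               "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-example:latest",
                               "memory": 512,
                               "portMappings": [{"containerPort": 80, "hostPort": 80}]}]'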

So there are various templates that are available. One is based on Fargate; it says "powered by Fargate." The second option is EC2 Linux + Networking, and the third is EC2 Windows + Networking. So let's select EC2 Linux + Networking here, and I'll click on the next step. So here, let's give it a cluster name, which I'll call my-cluster-demo. Now, within the provisioning model, I'll keep it as "on demand." For the EC2 instance type, I'll choose our favourite for the demo, which is the t2.micro, which also falls under the free tier. Let's keep the number of instances at two; you can even keep it as one in your case. The EBS storage needs to be at least 22 GB as a minimum; you cannot go lower than that. If you try to put in 15, you'll see that it gives you an error. So you need to put 22. For the key pair, I'm going to use my existing dp key pair.

Now, within the networking section, it has the option for creating a new VPC, or you can select the default one. The default starts with the 172.31 CIDR. I'll just select the default VPC. For the subnets, I'll select the subnets that are available within the VPC. For the security group, you can either create a new security group or use an existing one over here; let's leave it at default. I'll go ahead and create the cluster. So this will go ahead and create a cluster. It will take a certain amount of time for this to be created, so let's quickly wait for a moment. Great. So our cluster creation is complete. So let's click on View Cluster, and you'll see the status is "active." Now we could create a service, but we'll start with a task. Let's go to Tasks, and we'll click on "Run new task." Now, for the launch type, you have a choice between Fargate and EC2. I'll select EC2, and within the task definition, it will give you the task definition that we have created. Now, before we go ahead and run the task, let me quickly show you something. As you can see, ECS has launched two EC2 instances of type t2.micro. Now these two EC2 instances will have the ECS agent, which runs as a Docker container. Now, if you quickly look into the security group, it only has one rule allowed.

So let's do one thing. Let's go to the security group, and I'll allow one more rule. Basically, I wanted to show you that specific agent before we proceed. So for testing purposes, I'll just add port 22. Let's go back to the EC2 instances. I'll select one, I'll copy the public IP, and I'll quickly log in. Great. So let's quickly do a sudo su. And now, if you do a docker ps, you will see that there is an ECS agent that is running as a container within the EC2 instance. So this is the default thing that comes when you go with the EC2 launch type setup. So, coming back to our ECS tab, let's go ahead. We have already selected the task definition. Now we have the task placement, which is AZ balanced spread. This is great. We can proceed with running the task. So here it says that the task creation has been completed successfully. So now, if you go to the clusters, you should see that there is one running task. It also shows you the CPU utilisation, memory utilisation, and container instances. Now, if you just click on the cluster here and go to Tasks, you should see that there is one task that is running. Now, if you click on the task here, within the EC2 instance ID field, it shows you only one instance ID over here. However, we had configured two instances.

Now, in order to verify whether our Apache container image got deployed on both instances, let's click on one. I'll put its IP in my browser, and as expected, you are able to see Hello World. Now let's click on the other one. I'll copy the IP, and if you see over here, it is saying "Connection Refused." That basically means that the container has not been deployed there. So you can quickly verify that with docker ps. Now, within one EC2 instance, you see that there is only the single ECS agent that is up and running. Let's log out from here and quickly log into the second EC2 instance. And here, if you quickly do a docker ps, you should see that there is an ECS agent that is running, and there is also one more container that is running, which is ecs-my-definition, our Apache container. So this is the container that runs our Apache web server, which returns Hello World in the browser. Now, the question is why it did not get deployed to both of the instances. The Services tab answers that.

Now, if you want to run something like a web server or any long-running service like your application, you need to create a service; you should not create a task. If you create a task, it will not get deployed across all the instances. That's one thing. And tasks are basically used for short-running jobs, like if you want to perform some kind of database migration. So for those short-running things, tasks are generally used. For long-running processes, which are generally deployed across all the EC2 instances, you need to go ahead and create a service. Now, the second major difference between a task and a service is that if the task's container fails for some reason, ECS will not bring it back up. However, in case your service's container fails, let's say your Docker container running Apache under a service, ECS will automatically bring it back up. So that is a great feature of ECS, and that is one of the differences between services and tasks. Anyway, in today's video, I just wanted to share with you the overview of the ECS cluster as well as the difference between services and tasks at a high level.

75. ECS – Task Definition and Services

Hey everyone, and welcome back. Now, in the earlier video, we discussed the basics of ECS. We had created our first task definition, we had created our first cluster, and we had deployed our first task onto the EC2 instances associated with the cluster.

Now, along with that, we also discussed the high-level differences between tasks and services. So currently, we do have one task that is up and running on one of the two EC2 instances. And if I quickly do a docker ps in one of the instances, you will see that you have the Apache container up and running. One of the distinctions between tasks and services that we also discussed is that tasks are generally similar to batch jobs or short-running jobs that you may want to deploy, while services are long-running applications or long-running services like Apache, et cetera. Now, if the task or the containers associated with it stop working for any reason, ECS will not restart them automatically. So let's look at what I mean by that. Let's stop this specific container. So I'll do a docker stop, and I'll give the container's name. Great. So this container has stopped. Let's quickly verify with docker ps. And you should see that you only have the ECS agent that is up and running. So once the container that is associated with the task stops, or there are some issues with it, ECS will not automatically start it again. All right, so this is what the task is all about. Now, because this is a web server, similar to Apache, we want it to be up and running at all times.

So it is not quite suitable as a task. So we'll go ahead and stop the task. All right, so this is how you can stop the task. And now we'll go ahead and create our own service. So let's go to the Services tab, and we'll click on Create. So the launch type would be EC2, and the cluster is my-cluster-demo. The service name I'll use is apache, and we'll use the Daemon service type here. So there are two service types: Replica and Daemon. We'll go with Daemon mode, and for the minimum healthy percent, let's put it at 50. For the deployment type, you do have an option for rolling update as well as blue/green. And we'll proceed by clicking Next. So here, it will ask whether you want a load balancer. Ideally, since we have two instances, if you have this in production, you would need a load balancer, but we'll keep it simple; I'll simply mark it as disabled. For service discovery, I'll deselect it; we do not really need it right now for a demo. We'll click on "next." Let's click "next" again, and this will give you a review. We'll proceed with the creation of the service. Great. So now the service has been created. Let's go ahead and view it. So now you see that the service is running and that there are two running tasks of it.
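
For reference, the equivalent service creation from the CLI would look roughly like this, using the names from this demo; with the DAEMON scheduling strategy, one task is placed on each container instance, so no desired count is needed:

$ aws ecs create-service --cluster my-cluster-demo --service-name apache \
    --task-definition my-definition --scheduling-strategy DAEMON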

This can be better understood if we go back to our cluster. And now, within the services, you have a desired task count of two, and you have two running tasks. Let's click over here. Within the details, you don't have the load balancer, and within tasks, you should see two tasks, because we have two instances that are up and running. So now, coming back to our CLI, if I quickly do a docker ps, you should see that our Apache container is up and running now. So we'll try to stop this specific Docker container once more. Great. So our Docker container is now stopped. However, if you do a docker ps again after a few moments, you will see that the Docker container is operational again. So ECS will automatically restart Docker containers of a service that have failed. So this is the high-level overview of services and tasks. I hope this video has been informative for you, and I look forward to seeing you in the next video.
