96. Creating the first CodeDeploy configuration
Hey everyone, and welcome back. In today’s video, we will be doing the CodeDeploy configuration from scratch. So basically, there are certain steps that are needed in order to configure CodeDeploy. The first step is to create an IAM role with S3 and EC2 permissions.
So this will be the CodeDeploy service role. The second step would be to install the CodeDeploy agent. The third step is to configure the CodeDeploy application. So let’s go ahead and understand each one of them. So I’m in my CodeDeploy console. However, the first thing that we need to do is create an IAM role. So let me quickly go to the IAM service, and within the roles, we’ll click on Create Role. And this time, this role will be based on CodeDeploy, so you can choose CodeDeploy from this menu.
Now it is asking us to select the use case. We’ll be selecting the first one, which is CodeDeploy. I’ll go ahead and select it, and I’ll click on Next. Now, by default, you will have the AWSCodeDeployRole policy attached. I’ll just leave it at the default for the time being. We’ll click on Next, and we’ll do a review. Now, I’ll name this role codedeploy-demo, and we can go ahead and create the role. So now, within the roles, you have the codedeploy-demo role, which is created and has one permission. What you need to do now is add another policy. Let’s quickly attach the AmazonS3ReadOnlyAccess policy over here; you’ll come to know why in the later stages. All right? So this is the first step that you basically need to take. Now, the second step is to install the CodeDeploy agent. This CodeDeploy agent basically gets installed on the EC2 instance, so you’ll need to set up an EC2 instance. Let me quickly show you.
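For those who prefer the CLI over the console, the same role setup can be sketched roughly like this. This is a sketch, not the exact commands from the video: the role name codedeploy-demo matches the demo, and you would need credentials with IAM permissions to run it.

```shell
# Trust policy letting the CodeDeploy service assume this role
cat > codedeploy-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "codedeploy.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role \
  --role-name codedeploy-demo \
  --assume-role-policy-document file://codedeploy-trust.json

# The same two policies the console attaches: the CodeDeploy
# service-role policy plus S3 read-only access
aws iam attach-role-policy \
  --role-name codedeploy-demo \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
aws iam attach-role-policy \
  --role-name codedeploy-demo \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```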
In my case, I have already launched an EC2 instance. So this is the EC2 instance, which is kplabs-demo-1. It’s of type t2.micro, so it comes under the free tier. And this EC2 instance is linked to an IAM role called kplabs. So again, if I just quickly open this IAM role, you’ll notice it also has Amazon S3 read-only access. So it’s a very simple step, and I’m confident you’ll be able to complete it. Once you have the EC2 instance up and running, the next thing that you have to do is install the CodeDeploy agent. Now, the installation of the CodeDeploy agent largely depends on the operating system on which it is installed. Within the documentation—I’ll be posting this link below the video, so you can go directly there—you have separate pages for Amazon Linux or RHEL, for Ubuntu, and for Windows Server.
So it will depend on which OS you are installing it on. In my case, I will install it on Amazon Linux, so I’ll be clicking on the first option, and if you go a bit down, it basically gives you a set of commands that you need to run. Now, you don’t really need to do a yum update over here; you can directly go ahead and run the second command, which is yum install ruby. Let me try it out, and it says that ruby is already installed. That’s great. Then you have yum install wget, and the wget package is also already installed. And then it basically asks you to do a wget on the install file. So this is basically a script that will allow you to install the CodeDeploy agent. Now, if you see over here, there is a bucket-name placeholder that you need to fill in, and it is determined by the region from which you install it. So basically, this is like a variable, and you have to fill it in.
And if you look into the documentation, it already says that, for example, in the US East (Ohio) region, you replace the bucket name with aws-codedeploy-us-east-2. So let me just copy this, replace the placeholder, and run the wget. So basically, it will download the file, and if I do an ls, this is basically the script. Now, if you quickly do a nano on the install script, you will be able to see what exactly is happening behind the scenes. Once you have done this, you need to set the execute permission on the install script, and you need to execute this specific script with ./install auto. Once you do this, it will basically download the CodeDeploy agent onto your EC2 instance. And once it is done, you can quickly verify whether the CodeDeploy agent is running or not. So if you run service codedeploy-agent status, it will show that the AWS CodeDeploy agent is running, along with its PID—in my case, 3678.
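Putting the agent-installation steps together, the commands from the documentation look roughly like this for Amazon Linux in us-east-2 (the bucket name changes per region, as discussed, so substitute yours):

```shell
# Install prerequisites (ruby and wget)
sudo yum install -y ruby wget

# Download the installer script; the bucket name is region-specific,
# e.g. aws-codedeploy-us-east-2 for US East (Ohio)
wget https://aws-codedeploy-us-east-2.s3.us-east-2.amazonaws.com/latest/install

# Make it executable and run it in auto mode
chmod +x ./install
sudo ./install auto

# Verify the agent is running
sudo service codedeploy-agent status
```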
Perfect. So now we have the CodeDeploy agent, which is running perfectly. Now, the third step is to configure the CodeDeploy application, so we can go to the CodeDeploy page. So now I’m on my CodeDeploy page, and within the Getting Started section, you can click on Create Application. Let me name this kplabs-codedeploy, and the compute platform would be EC2/On-Premises. You also have options for Lambda and ECS. I’ll go ahead and create the application. Perfect. So now that the application is created, the first thing that you have to do is create a deployment group. So let me create a deployment group, and within the deployment group, you have to give it a name. Let me call it kplabs-deployment-group. And then you have to specify the service role.
So this service role is basically the CodeDeploy role that we had created in the first step. In my case, it is codedeploy-demo. Now, the next thing is the deployment type. It can be in-place as well as blue/green. We’ll just keep it simple: as of now, we’ll select in-place, and the environment configuration is Amazon EC2 instances. Now that you have selected this, you have to specify which EC2 instances you want to add as part of the environment configuration. All I want to do is deploy to a single EC2 instance, so I’ll select the tag key as Name and the value as kplabs-demo-1. This is the EC2 instance where we had installed our CodeDeploy agent in the previous step. Once you have done that, you can go a bit further down and deselect the load balancer over here. And within the deployment settings, you have various deployment options like one at a time, half at a time, and all at once. We’ll just leave it at the default, and we can go ahead and create the deployment group. Perfect. So our deployment group is now created. Now, the next part is to actually create a deployment, because we have created our CodeDeploy application and we have configured the deployment group.
Now it’s time for the deployment. So we can click on Create Deployment here. After that, it will basically show you which deployment group it is associated with—in this case, the kplabs deployment group—and the revision type, i.e., where your application is stored, which you have to specify here. So basically, this is the zip file containing the appspec.yml. Now, if you see over here, I have an appspec.yml and a scripts directory. During deployment, there is a specific way in which your application gets configured. It would be different for different kinds of applications, and that logic is defined within the appspec.yml file. So let me quickly open this up, and if you look into the appspec.yml, all we are saying is that first it should execute this specific script, which is pull-build.sh, and then it should execute a script called run-build.sh.
Now, these are in the location called scripts. If you quickly open this up, you’ll notice that I have a directory called scripts, and within that directory, I have run-build.sh and pull-build.sh. The first thing that we are doing is pull-build.sh, and if you quickly open this up, it just contains one line. What it is doing is copying the binary that CodeBuild stores within a specific S3 bucket. So during the CodeBuild video, we stored the binary in this specific location. Within pull-build.sh, what we are doing is pulling the latest version of the build from the S3 bucket and storing it under the /tmp directory. So this is the first step, and in the second step, all we are doing is running the build. Within run-build.sh, we are setting an execute permission, which is chmod +x on /tmp/kplabs, and we are executing the kplabs binary, and whatever output it has—because it is a binary—we are storing under the /tmp/kplabs.txt file. So this is the run script. Now, once you have finished this, zip both of these together. So if you want to quickly zip on Windows, you can just select both of these.
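To make the structure concrete, a minimal appspec.yml along the lines described above could look like this. Note this is a sketch: the hook names (AfterInstall, ApplicationStart) and the timeouts are my choice, and the video’s exact file may differ slightly.

```yaml
version: 0.0
os: linux
hooks:
  AfterInstall:
    - location: scripts/pull-build.sh   # copies the binary from S3 to /tmp
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/run-build.sh    # chmod +x and execute the binary
      timeout: 300
      runas: root
```

Here pull-build.sh would essentially be a one-liner along the lines of `aws s3 cp s3://<your-artifact-bucket>/kplabs /tmp/kplabs`, and run-build.sh something like `chmod +x /tmp/kplabs && /tmp/kplabs > /tmp/kplabs-log.txt`; the bucket name and log file name here are placeholders, not the exact values from the video.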
I’ll right-click here, I’ll do Add to Archive, and you have to select the zip format over here, and you can just give it a name—it is kplabs-repo in my case, but you can give any other name and proceed. Okay. And this is basically the zip file. So you must upload this zip file to a specific S3 bucket, and you must specify it within the revision location. So if I quickly open up the S3 buckets, let me show you: I have two buckets. This is the bucket that we will be putting our zip file into, which is kplabs-deploy-artifacts. Now, I already have a zip file over here, but let’s do one thing: let’s upload the zip file that we just created. This is basically the zip file, which is kplabs-repo.zip. I’ll go ahead and upload it. Once this is uploaded, you basically have to give the location of this specific zip file.
So I’ll say s3://, then you have to give the name of the bucket, which is kplabs-deploy-artifacts, and then you have to specify the zip file, which is kplabs-repo.zip. All right. So CodeDeploy will now fetch kplabs-repo.zip, extract it, and whatever instructions are present within the appspec.yml, the CodeDeploy agent will follow. Now, within the revision file type, it automatically detects that it is a zip. Once you have done that, you can go ahead and create a deployment. But before we do that, let me quickly open up the terminal. You will see that there are no files present within the /tmp directory.
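For reference, the zip-and-upload flow and the deployment creation can also be sketched from the CLI. The names here match the demo (kplabs-deploy-artifacts, kplabs-codedeploy, kplabs-deployment-group), so substitute your own:

```shell
# Bundle the appspec.yml and the scripts directory into the revision
zip -r kplabs-repo.zip appspec.yml scripts/

# Upload the revision to the artifacts bucket
aws s3 cp kplabs-repo.zip s3://kplabs-deploy-artifacts/

# Create the deployment against the application and deployment group
aws deploy create-deployment \
  --application-name kplabs-codedeploy \
  --deployment-group-name kplabs-deployment-group \
  --s3-location bucket=kplabs-deploy-artifacts,key=kplabs-repo.zip,bundleType=zip \
  --region us-east-1
```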
So once we have this basic configuration, we can go ahead and click on Create Deployment. Now, within the deployment window, you will see the status is currently Created; it will soon change to In Progress, and you will be able to see your deployment status within this specific bar. So currently, it says that the deployment has succeeded. Now, in our case, if the deployment has succeeded, we should have the files within the /tmp directory. And if you do an ls, you will see that I have these specific files: one is the appspec.yml, and the second is the scripts directory. Now, within the appspec.yml, we told it to execute the kplabs binary, and the output would be in kplabs-log.txt. And if you quickly open this, you should see “Hello, world.” So this is working as expected. So this is a high-level overview of how to configure CodeDeploy from the ground up. Again, I’ll be posting these files below the video so that you can do the same practical. I hope this video has been informative for you, and I look forward to seeing you in the next video.
97. Overview of CodePipeline
Hey everyone, and welcome back. In today’s video, we will be discussing the AWS CodePipeline service. Now, in previous videos, we discussed AWS CodeCommit, CodeBuild, and CodeDeploy, and if you saw it, it was more of a manual process.
So it’s like, as soon as I do a commit in the AWS CodeCommit Git repository, I have to run CodeBuild. After running CodeBuild and storing the artifacts in a specific S3 bucket location, I configure and run CodeDeploy. If someone modifies the code in CodeCommit again, they have to run CodeBuild and then CodeDeploy once again. So all of this is more of a manual process. What you want is for everything to be automated, so that as soon as a developer commits the code in CodeCommit, CodeBuild will run and do all of the testing and building for you. Once CodeBuild completes, it should store the artifacts in a specific S3 bucket, and then CodeDeploy should run, which would deploy the latest build to all the EC2 instances. And CodePipeline enables this entire collaboration between CodeCommit, CodeBuild, and CodeDeploy. So let me quickly show you how CodePipeline looks, as this will help us understand it a lot better.
So I’m in my CodePipeline console, and if you see, I have one pipeline that I created, called demo-pipeline. Now, within the demo pipeline, there are three stages, which are present over here: one is the source stage, the second is the build stage, and the third is the deploy stage. And, as expected, the source stage consists of CodeCommit, the build stage consists of CodeBuild, and the deploy stage consists of CodeDeploy over here. Basically, you can even have various other options. For the source stage, your repo might be on GitHub, so you can add GitHub over here. For building, you might not necessarily use CodeBuild; you might have a Jenkins instance, so you can integrate Jenkins over here. And you have the deploy stage over here. So this entire automation is part of the CodePipeline service. Basically, when a developer makes a commit—say, to AWS CodeCommit, in our case—the demo pipeline is automatically invoked, and all three stages, which are shown here, are executed. So let’s do one thing; let’s try it out.
So I’ll go to my repositories within AWS CodeCommit, and I have a repository called kplabs-codepipeline, which is integrated with the demo pipeline that I have created. Now, whenever a developer makes a change in this repository, CodePipeline will run all three stages that are present over here. So this essentially makes continuous delivery possible. So let’s do one thing: let’s add a file over here; let’s create a file. I’ll say this is a demo file, and I’ll call it demo.txt. For the author name, I’ll say Z, and for the email address, I’ll say instructors@kplabs.in. So I’ll go ahead and commit the changes over here. Great. So now you see it says demo.txt has been committed to master. Now, CodePipeline will detect that there has been a change to the CodeCommit repository, and this source stage will be initiated in just a moment.
So let’s quickly wait here. And now you can see it’s happening. Once the source stage completes, CodePipeline will move to the second stage, which is the build stage, where the actual build will happen. Following the build and the storage of all artifacts in S3, it can proceed to the deploy stage, where CodeDeploy will push those artifacts to the relevant EC2 instances. So let’s quickly wait for the source stage to be complete. And now you see that it has succeeded. And now that it has succeeded, the CodeBuild stage has begun.
So, once again, the build stage should be brief, so let’s quickly wait here. Great. So now the build stage has also been completed successfully. Now comes the deploy stage. During the deploy stage, CodeDeploy will take the artifacts created by the build and push them to the EC2 instances. If the build stage fails over here, the third stage will not run, because these are essentially the three stages that are required for continuous delivery, and CodePipeline automates the entire pipeline. And now you see that even the deploy stage has succeeded, and within the EC2 instance, your deployed build would have been pushed. So this is a great service. And in fact, if you are using CodeCommit, CodeBuild, and CodeDeploy—or even if you’re using GitHub, or Jenkins for builds—CodePipeline will really make things easier, because it does all the automation right out of the box for you.
So now that we have seen the demo, let’s go back to the theoretical aspect and understand the three stages. The first point is that AWS CodePipeline gets triggered automatically whenever a commit is made to your source repository. So as soon as a code change has been made, it will trigger the pipeline to run. After the pipeline runs, the first stage is the source stage, and the output—whatever is present within the repository—gets stored in the S3 bucket. From there, it proceeds to the second stage, which is the build stage. The output of the build stage is again stored in the S3 bucket, and that output acts as an input to the third stage, which is the deploy stage. And CodeDeploy, which runs in the deploy stage, will push whatever build is present to the destination. Simple to illustrate.
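As a rough sketch, the same three-stage flow can be triggered and inspected from the CLI as well. The pipeline name demo-pipeline matches the demo; substitute your own:

```shell
# Trigger a run manually (a commit to the source repo triggers it automatically)
aws codepipeline start-pipeline-execution --name demo-pipeline

# Inspect the state of the Source, Build, and Deploy stages
aws codepipeline get-pipeline-state \
  --name demo-pipeline \
  --query 'stageStates[].{stage:stageName,status:latestExecution.status}'
```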
98. Introduction to Message Brokers
Hey everyone, and welcome back. So today we’ll be speaking about message brokers. Now, this is a very important topic, specifically when it comes to scalable systems. So let’s go ahead and understand the need for a message broker. Let’s take a sample use case.
So the use case is a resume-to-text converter. You may have noticed in such applications—specifically, if you are on LinkedIn and upload your resume—that LinkedIn will convert or extract the text information from it and show it to you as soon as you upload your resume. Similarly, there are numerous applications that allow you to upload a file and have that application extract text information from a PDF or document file and show it to you. So what we have is a resume-to-text converter, where application XYZ is meant for extracting the values from a resume and showing them in textual format. Now, it also says that there are two components to this application: the document gatherer comes first, followed by the document converter. Further, it says each component belongs on a separate server to distribute the load. So this is a very simple use case. Actually, let me do one thing: I’ll demonstrate one such website for converting PDF to text.
So what really happens here is that you upload a document file. Allow me to upload one of the PDF files, and it is currently converting that file. And now, if you click on “download all,” you will get a zip file. So if I open up the zip file, what you will see is that the document has been converted into textual information. So whatever document was present in a PDF is now converted into text information. So this is a very simple application, but there are some complex applications that will do the formatting as well. But just to emphasise that, this is the application that we are speaking about. So this is the document gatherer, and this is the document converter. So the document gatherer will do the same thing you see on the website.
So on the front end, you have a section over here, and in the front end, you have a button called “upload files.” This is the document gatherer, which basically allows the user to upload the document. And there is a second application behind the scenes, which is the document converter. So you have PDF to Doc, PDF to Docx, PDF to Text, and PDF to JPG—a document converter that can convert between formats. Okay, so there are two components involved: this is the first component, and this is the second component. Now, as soon as the document gatherer takes the document, it will send it to the document converter. The document converter will convert it to the specified type and then send the document back to the user. Now, one of the challenges with this kind of approach is: what if one of the components fails? Let’s assume the document converter fails. Now, even if the user uploads the file, it cannot be sent to the document converter. So this is a challenge. Let’s look into a few of the challenges.
One such challenge is that, due to the popularity of the application and a huge traffic spike, the company has decided to scale out horizontally and add two converters. So you have one gatherer, and you have two converters. What would really happen is that in the configuration file of this application, you would specify two endpoints: the first endpoint is for converter 1, and the second endpoint is for converter 2. Now, let’s assume that there is a huge traffic spike and the company has decided to launch two more converters to handle the load. So now what would really happen is that once you add two more converters, you have to edit the configuration file of the document gatherer to add the endpoints of the two new servers that were launched.
And after an hour, let’s assume one of the document converters went down for some reason. So again, after this server went down, you have to change the configuration file of the document gatherer and remove the IP address or hostname of the specific machine that went down. So these are some of the challenges that might be faced in a real-world environment. This is not a properly scalable solution, and it would lead to scalability issues. And in order to solve this particular aspect, you have a new way of doing it: through a message broker. So let’s assume this is a message broker. Now, what would really happen is that whenever the document gatherer receives a document from the user, it will send it to the message broker, and the document will be stored in one of these blocks. These blocks are called queues.
So it will be sent into these queues, and there are document converters that will connect to this queue and fetch the documents. Looking at the introductory part, one of the main functions of the message broker is to take messages from the publisher—so this component becomes the publisher—and forward them to the consumer—so this component becomes the consumer. So any application that publishes a message to the queue, or to the message broker, is called a publisher, and any application that receives a message from a specific queue is called a consumer. Now, this is called a message broker, and the broker stores these messages internally inside these specific queues. So in the message broker, there are queues in which the messages are basically stored. Now, with this kind of approach, there are a few important features that a message broker provides. We’ll talk about two important features.
There are a lot of them, but just to simplify things, right now we’ll talk about two. First is guaranteed delivery, and second is orderly delivery. When we talk about guaranteed delivery, that basically means that whenever a publisher publishes a message and it goes to the message broker’s queue, the message broker will make sure that the message received from the publisher will not be lost. So definitely, the question comes up: what happens if the message gets lost in the message broker queue? And the answer is that if it is properly implemented and if the protocols are properly followed, a message will not be lost. That is one feature. The second feature is orderly delivery.
Now, this is also very important. As an example, suppose you send messages one and two. When the consumer fetches the messages from the queue, it should receive message one first, and then it should receive message two—in sequential order. And that is the second important feature of a message broker. So this is something that you’ll have to remember. Don’t worry, we’ll be doing message broker practice sessions, so everything will become much clearer. Lastly, there are two important concepts that I would like you to remember: tightly coupled systems and loosely coupled systems. When we talk about a tightly coupled system, it is a system architecture in which both applications are linked together and are dependent on each other. What this really means is that if this document converter goes down, then the entire application will break.
Okay? So that means they are hard-dependent on each other. I’ll tell you one of the use cases: in one of the organisations that I used to work with, we had a tightly coupled system. We had around 20 servers, and any time one went down, the entire website would go down. And that was a real pain, because maintaining 20 components 24/7 is really a challenge, and if one component goes down due to some issue, then the entire website is down. Because many organizations’ systems are tightly coupled, this is one of the challenges they face. Okay? So the next thing is a loosely coupled system. A loosely coupled system is a system architecture whose components can process the information without being directly connected. So I would say message brokers are an example of a loosely coupled system.
So what happens in this case is that this is the publisher, and this is the consumer, and they are not directly connected to each other. That is one aspect. The second aspect is that the message will be sent to the message broker by the document gatherer, or publisher, and the consumer will connect to the message broker and receive those messages. Now, let’s assume that consumer one went down. Then you don’t really have to change the configuration file of the publisher. The only endpoint that you have to specify is that of the queue, or the message broker. You don’t have to specify the IP addresses of the consumers who will be receiving the messages, and you can always add more and more consumers. So you can add consumers based on auto-scaling. Let’s assume that there are a lot of messages in this specific message broker; then, depending upon the load on the message broker, you can dynamically increase the number of consumers. And all a consumer has to do is connect to the message broker where the message is stored and retrieve it.
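To make the loose coupling concrete, a consumer in this architecture is essentially just a polling loop against the queue endpoint—it never needs to know the publisher’s address. Here is a rough sketch using an SQS-style CLI as the broker (SQS is covered in the next lecture); the queue URL is a hypothetical placeholder:

```shell
# Hypothetical queue URL -- the only endpoint a consumer needs to know
QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/document-queue"

while true; do
  # Fetch a message from the broker (long polling: wait up to 20 seconds)
  aws sqs receive-message \
    --queue-url "$QUEUE_URL" \
    --wait-time-seconds 20
  # ...process the document here, then delete the message from the queue...
  sleep 1
done
```

Because every consumer runs the same loop against the same queue URL, adding or removing consumers (for example, via auto-scaling) requires no change to the publisher at all.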
99. Revising SQS
Hey everyone, and welcome back. Now, in the earlier lecture, we learned the fundamentals of message brokers and performed a practical demonstration of how they would actually work. Many of you might ask, “What happens if this message broker goes down?” This is a real challenge, because if the message broker goes down—no matter if you have, say, 5 or 10 consumers—if the message broker itself is not there, then the entire flow is lost.
Now, this message broker is basically an application. Let me show you. There is a very famous application called RabbitMQ, which basically acts as a message broker. Configuring it is very simple: you have to install RabbitMQ on a specific server, and then you have to put its endpoint in both the publisher and the consumer. But if the server on which RabbitMQ is installed fails, the entire flow is lost. So this is one aspect, and this is the reason why in today’s lecture we’ll study Amazon SQS.
So, let’s start. Amazon SQS is fast, reliable, and scalable. Not only that, but it is also a fully managed message queuing service, and this is very important. So the entire part related to managing the message broker—making it highly available and making sure that it has enough RAM and CPU to receive and send messages—you don’t really have to worry about any of those configurations; AWS SQS will do all those things for you. And being fully managed really means that, as a solutions architect, I don’t have to worry about managing SQS or about it going down. So this is quite important. The second point to mention is that AWS SQS simplifies things while also being very cost-effective. And since this is a message broker, it allows us to decouple the components of a specific application. Perfect. So that’s just one slide for this specific lecture. Let’s do something practical so that things become much clearer to us.
So I’ll go to the SQS console. Now, I have one SQS queue that was already available. Let’s go ahead and create one more queue, so things will become much clearer. I’ll click on Create New Queue, and I’ll name the queue, let’s say, kplabs-demo. Now, there are two types of queues that are available. Let’s select Standard for now; we’ll be talking about FIFO in the upcoming lectures. Once you select this, click on Quick-Create Queue. And now you will see our queue has been created. Quite simply, it hardly took us five or ten seconds to do that. Now, as soon as the queue gets created, if you look over here, it gives us the endpoint URL. And this is the endpoint URL that we have to configure in the publisher so that it can publish messages to that endpoint. It also has to be configured on the consumer so that it can connect to that endpoint URL and receive messages. Okay? So let’s do one thing. Let me click on Queue Actions and select Send a Message. So this is the message that we can send—quite simple; we can send messages from the console itself.
So I’ll say “This is our first practical session for SQS.” I’ll write this down, and I’ll click on Send Message. Now I’ll click on Close, and under Messages Available, you will find that one message is available. In order to check what messages are available, go to Queue Actions and click on View/Delete Messages, and I’ll click on Start Polling. Because messages are stored in this specific queue, if you want to read a message, you have to fetch it, and that fetching is called polling. So by default, you will not see anything; if you want to fetch the messages from the queue, you have to click on Start Polling for Messages. So now it is polling for messages, and this is our message, which we sent to the SQS queue. Now, in order to fetch this specific message from a server—or, I would say, from an application—you can either do it via the AWS CLI or via an AWS SDK; if you’re using Python, then boto is something for you. So let’s do one thing: we’ll try it via the CLI, so things will become much clearer to us. The AWS SQS CLI is quite good in terms of doing most of the operations. So we’ll do two things: a list-queues operation and a receive-message operation.
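The console steps above (creating the queue and sending a message) have straightforward CLI equivalents, sketched here with the queue name from the demo; the account ID in the URL is a placeholder:

```shell
# Create the queue; this returns the queue URL used by publishers and consumers
aws sqs create-queue --queue-name kplabs-demo --region us-east-1

# Publish a message to the queue (the publisher side)
aws sqs send-message \
  --queue-url "https://sqs.us-east-1.amazonaws.com/<account-id>/kplabs-demo" \
  --message-body "This is our first practical session for SQS" \
  --region us-east-1
```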
So the syntax is aws sqs list-queues. If you don’t have the queue URL handy, you can just run the aws sqs list-queues command, and it will show you all the queues that are available in a specific region. So let’s try it out. This is the server we are connected to, and it has the IAM role, which is kplabs. Within this role, I have given Amazon SQS full access, so it will be able to send messages, delete queues, and do everything. So we’ll run aws sqs list-queues. This is the command, and you also have to give the region. Again, in AWS there are a lot of regions available, so I’ll specify the region as us-east-1, because this is where my SQS queues are created. And now, as you will see, it gives me the queue URLs. These are the two endpoints of the queues that are available in this specific region. Now, I want to receive a message from this specific queue. How do I do that? Let’s try that out with the receive-message CLI command.
So you have aws sqs, then you have receive-message, and then you have to put the queue URL over here. Okay, this is quite simple. Let’s try it out: aws sqs receive-message with the queue URL, and we’ll put the URL of this kplabs-demo queue, and let me press Enter. And there seems to be some error—let me just quickly verify. I just verified the spelling. Let’s try this out again. Yeah, here it goes: we forgot a hyphen. So if you look at the CLI, you have to write receive-message with a hyphen, and the queue-url option with two hyphens, as --queue-url, over here. Now, let me clear the screen and press Enter. Okay, it is asking us for the region, so you can specify the region as us-east-1. And now, if you run it, you will find that the command has extracted the message from the SQS queue. Perfect. Now, there’s one more thing I wanted to show you.
If you look over here in the first queue, we have five messages that are available. So let's try to pull the messages from the kplabs-configuration queue. I'll copy the queue URL, replace the queue name, and press Enter. And you will see that it is showing me message three. There are five messages available at the same time, but by default the command will only fetch one message. So, if I do it again, you'll notice that this is message four. So first we got messages three and four, and if I do it one more time, this is message two. So every time I run it, it fetches one message. This can definitely be modified, and you can fetch multiple messages at once as well. So this is it for this lecture. I would really encourage you to try this out once. And in the next lecture, we'll go ahead and understand SQS and the related configurations in more detail.
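The multi-message fetch mentioned above uses the --max-number-of-messages flag; a sketch with the same placeholder queue URL:

```shell
# By default receive-message returns a single message; --max-number-of-messages
# (a value from 1 to 10) asks SQS to return up to that many in one call.
aws sqs receive-message \
    --queue-url https://queue.amazonaws.com/123456789012/kplabs-configuration \
    --region us-east-1 \
    --max-number-of-messages 10
```

Note that SQS may still return fewer messages than requested, even when more are available in the queue, so consumers should poll in a loop rather than rely on getting the maximum.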
100. Visibility Timeout in SQS
Hey everyone, and welcome back. In today's lecture, we will be speaking about the visibility timeout. This is a very important configuration parameter as far as message brokers are concerned. So let's go ahead and understand visibility timeout in much more detail. Going with the classic example that we have been discussing for the past few lectures, you have a publisher, you have a message broker, and you have consumers. Now let's assume the publisher has published a message, M1, to the message broker. This message is now in the queue of the message broker, and the consumer, in order to fetch the message, will poll this specific queue and fetch it. So now message M1 is received by the consumer, and after the message gets received, it gets deleted from the queue.
So this is the basic case. Now, say the consumer is processing the message, and during that time some issue occurs, causing the consumer to go offline. The problem is that message M1 was fetched by the consumer, but the consumer was not able to process it. The consumer is down, and since the message was already deleted from the queue, it is now lost. This is the reason why message brokers do not really delete the message right after it is fetched by a consumer. So let's take a look at how the actual approach works. Whenever a consumer receives a message from a queue, the message still remains in the queue and is not deleted. So we have a message M1 that the publisher has sent to the queue, and the consumer received that message. Instead of deleting the message, what happens is that message M1 is hidden, and it remains hidden for a set period of time. In that time, if the consumer goes down and does not acknowledge the queue within that specific interval, the queue will again make the message visible.
And now, whenever a second consumer polls the message broker, it will send that same message to the second consumer. After consumer two processes this message, it will send a delete call to delete it from the queue. So this is how things really work. Now the question is: for how long should the message be hidden? And that is specified by the visibility timeout parameter. So let's do something practical to make this much clearer. I have a kplabs-demo queue, and if you see over here, I have one message that is available. So we have our server, and I'll run the aws sqs receive-message command. This is something that we have already seen in the earlier lecture, and I'll press Enter. So now what is happening is that the consumer has received this one message, and SQS will hide it. As you can see now, the messages available count goes back to zero. Now, if I poll again, since the message is hidden, I will not get any value over here. So the question is how long the message will remain hidden, and the answer to this is in the SQS queue configuration itself.
So if you go into the queue configuration, the default visibility timeout is 30 seconds. What this basically means is that as soon as the consumer fetches the message, the message will remain in the hidden state for 30 seconds. After 30 seconds, if the consumer has not sent a delete call for that specific message, the message will again come back to the available state. So now, as I had not sent a delete call for this specific message, SQS will again make that message available, thinking that something happened to the consumer, it went down, and the processing did not complete. So if I press Enter again here, let's do it one more time, you see that the message is now available. Again, because I retrieved the message, it will transition from the available to the in-flight state, and the available count will go back to zero. So this is the basic information about the SQS visibility timeout, and I hope you understand the fundamentals. Now, ideally, the visibility timeout should be configured depending upon the time it takes for the consumers to process a specific message. So if the consumer in your environment takes, say, 120 seconds to process a message, then you have to set the visibility timeout to at least 120 seconds.
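The visibility timeout can also be inspected and changed from the CLI rather than the console. A sketch, again with a placeholder queue URL:

```shell
# Read the current visibility timeout on the queue (the default is 30 seconds).
aws sqs get-queue-attributes \
    --queue-url https://queue.amazonaws.com/123456789012/kplabs-demo \
    --attribute-names VisibilityTimeout \
    --region us-east-1

# Raise it to 120 seconds to match a consumer that needs about that long
# to process each message.
aws sqs set-queue-attributes \
    --queue-url https://queue.amazonaws.com/123456789012/kplabs-demo \
    --attributes VisibilityTimeout=120 \
    --region us-east-1
```

A consumer can also extend the timeout for a single in-flight message with the change-message-visibility subcommand, which is handy when one particular message turns out to need longer than usual.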