116. Elastic Beanstalk
Hi everyone, and welcome back to the video series. Continuing our journey, today we'll be speaking about Elastic Beanstalk. Over the past year, Elastic Beanstalk has gained a good amount of popularity, specifically among startups, because it makes it quite easy to launch new infrastructure in a point-and-click manner. It is something similar to an orchestration platform. So let's do one thing: let's understand Elastic Beanstalk with the help of a simple use case.
Now, we have a use case where there is a need to deploy a simple Hello World application on an EC2 instance behind an ELB. So this is a very simple use case: there is a requirement for an EC2 instance under an ELB, and that EC2 instance will host a straightforward Hello World application. So how would we achieve this use case in a traditional way? In the traditional manner, we create an EC2 instance. Once the EC2 instance is created, we have to modify the security group and other settings. Then we log into the EC2 instance, install a web server such as Apache or NGINX, and then install the application dependencies. By application dependencies I mean that if your Hello World application is based on Java, you need to install Java packages; if it is based on Python, you have to install Python packages.
So these things need to be done. Once you install the web server and the relevant application dependencies, you go ahead and configure your application on the server. So you upload your application files to the server, put them in the right directory, and make sure the permissions are correct. Once everything seems to be working fine, you create a new ELB, configure the health check, and point it to the right EC2 instance. So this is definitely not a very difficult task. However, doing all of these things is difficult for startups or small organizations that lack a dedicated solutions architect or a DevOps team. I've seen a lot of startups like this; all they have are developers, and they can't afford a dedicated DevOps engineer. So their developers create their own infrastructure, and when you have to create infrastructure without that operations background, it really becomes challenging.
And this is the reason why Elastic Beanstalk really helps you. In the Beanstalk way, you create an Elastic Beanstalk environment with the correct platform, and once you do that, with a simple point-and-click you will be able to deploy the entire application. It is a very, very simple process. So let's go ahead and do the practical session, so this part becomes much clearer. I am logging into my AWS console; let's open up Beanstalk. Perfect. So this is the Elastic Beanstalk console. You'll see over here that there are only three steps that you have to do. First, you select a platform; the platform is determined by the language in which you wrote your code, such as PHP, Python, Java, and so on. Once you select a platform, you upload your application as a zip file and run it. Okay, only three simple steps. So let's try it out. I'll click on "Get started," and it asks for an application name. I'll say the application name is KPLabs. Now I have to select a platform.
So there are various platforms available. What really happens when you select a platform? Let's assume I select PHP. Then, when Elastic Beanstalk creates an instance, it will automatically install all the relevant PHP-related packages on your server, so you don't really have to do it on your own. Once you select the platform, you have to upload your code. You can click here, select upload, and upload your code as a zip file. This is one method; another is to use a sample application. We'll use a sample application in our case, and once you select the sample application, just click on "Create application." Elastic Beanstalk will now create an EC2 instance and a security group, upload the sample application to the server, and configure everything for you. So you don't really have to do anything; this is how simple it is. Naturally, this takes some time. The "create environment" call has started to work, and it might take around five to six minutes for the code to get deployed.
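The same three console steps can also be scripted. The sketch below only builds the request payloads that the console issues behind the scenes; the application name, environment name, solution stack string, and S3 location are all illustrative assumptions, not values from the video.

```python
# Build (but do not send) the Elastic Beanstalk API request payloads for
# the three console steps: create an application, register the uploaded
# zip as an application version, and launch an environment on a platform.
# All names below are illustrative assumptions.

def beanstalk_requests(app_name, env_name, platform_stack, bundle_bucket, bundle_key):
    """Return the payloads for create_application, create_application_version,
    and create_environment."""
    create_application = {"ApplicationName": app_name}
    create_version = {
        "ApplicationName": app_name,
        "VersionLabel": "v1",
        # the uploaded zip ends up in S3 as the source bundle
        "SourceBundle": {"S3Bucket": bundle_bucket, "S3Key": bundle_key},
    }
    create_environment = {
        "ApplicationName": app_name,
        "EnvironmentName": env_name,
        "SolutionStackName": platform_stack,  # this is the "select a platform" step
        "VersionLabel": "v1",
    }
    # With boto3, each payload would be passed as keyword arguments to
    # boto3.client("elasticbeanstalk").create_application(...) and friends.
    return create_application, create_version, create_environment

app, ver, env = beanstalk_requests(
    "kplabs", "kplabs-env",
    "64bit Amazon Linux 2 v3.5.0 running PHP 8.1",  # assumed stack name
    "my-bucket", "app.zip",
)
print(env["SolutionStackName"])
```

This is only a local model of the request shapes, not a definitive implementation; the actual calls require AWS credentials and a valid solution stack name for your region.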
So if you see over here, it is now creating a security group. Once the security group is created, it can go ahead and create an EC2 instance as well, and once the EC2 instance is created, it will install the packages inside it. So now the security group is configured. Next, an Elastic IP is configured so that the environment can be accessible over the internet; then the EC2 instance is launched, and the EIP is associated with the EC2 instance. So let's do one thing: I'll pause the video for a while, and I'll come back once this process is complete. Okay, so it took a few minutes, and now our Elastic Beanstalk application is ready. What you see over here is that once your Beanstalk application is ready, it gives you a public URL. If you just open it up, as you see, it will show you your sample application.
So we just had to go through three steps, and Elastic Beanstalk actually did everything for us. Now, if you look over here, there are some interesting settings. One is the logs: if you want to see the logs on the server, you can do that. Beanstalk has, of course, created an EC2 instance behind the scenes; let me just show you. You could log into the instance and get the logs yourself, but Beanstalk does things in a much simpler way. As you can see, it has created a KPLabs environment; this is basically our Beanstalk environment name. If you click on "request logs," I'll say "full logs," and once it fetches the logs from the server, it will display them to you. So let's just wait. Let me show you. So you have the httpd logs, which means it has configured the Apache web server for you. These are the EB activity logs; let me just show you. This will actually show you what exactly Elastic Beanstalk has done for you behind the scenes. It is installing packages like awslogs, something related to CloudWatch as well, CloudWatch Logs, and many other things.
Anyway, coming back to the topic, let's go to the PPT and see what exactly Elastic Beanstalk has done for us. First things first, it created a security group. Then it created an Elastic IP address. Once this was created, it launched an EC2 instance for us. Once the EC2 instance was launched, it configured our platform environment within it, such as installing the AWS CLI, the CloudWatch plugin, the Apache web server, PHP-related packages, and so on. Once the platform environment is configured, it uploads your application. In my case, it was the sample application, but if you clicked on the upload button and uploaded your zip file, it would upload it, unzip it, and configure your application. And at the end, it gives you a public endpoint where you will find your application running. So in just three simple steps it does everything for us. And this is why developers really love it: they don't really have to do much of the technical stuff. All they have to do is click three buttons, and the entire environment is created for them.
117. AWS Config – Part 02
Hey everyone, and welcome to Part 2 of AWS Config. In the previous lecture, we discussed the fundamentals of AWS Config and how it can assist us in tracking infrastructure changes. Today we'll look at some more AWS Config features, including one very cool feature: compliance checks against the rules that you provide. So let's understand what that means. Just monitoring infrastructure-related changes is not enough; as security specialists, we should be monitoring the security aspect as well. So there are various use cases related to security best practices: root MFA should be enabled, logging should be enabled on all S3 buckets, security groups should not allow unrestricted traffic on port 22 or other ports like 3306, CloudTrail must be enabled, and, one more rule, no unused EIP should be present.
So this last one can be part of the cost factor as well. These are the five points related to security, as well as cost optimization, which are important. Now, how do you actually monitor all of these things? This is just a sample list; there can be hundreds of different points. There should be some kind of centralised dashboard that can tell you whether your account is compliant with all of these rules, and this is something that AWS Config enables us to do. So, based on the rules that you configure, AWS Config can show you the compliance status. For example, in this compliance status there is a rule for restricted SSH, and you see it is compliant: SSH is restricted and not open to 0.0.0.0/0. All the EIPs are attached, so that means there is no unused EIP. Root MFA is enabled, so it is compliant. However, there are certain resources that are non-compliant here. So, simply by looking at the dashboard, you can determine whether or not your infrastructure is compliant. And generally, if an auditor comes, you can directly show the auditor this page, provided you have all greens over here.
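To make the compliance idea concrete, here is a minimal sketch of the kind of check the "restricted SSH" rule performs: a security group is non-compliant if any ingress rule opens port 22 to 0.0.0.0/0. The rule-data structure here is an illustrative simplification, not the real Config resource schema.

```python
# A toy evaluation of the "restricted SSH" idea: port 22 must not be
# reachable from 0.0.0.0/0. The ingress-rule dict shape is an assumption
# for illustration, not AWS Config's actual resource schema.

def evaluate_restricted_ssh(ingress_rules):
    """Return COMPLIANT unless some ingress rule opens SSH to the world."""
    for rule in ingress_rules:
        covers_ssh = rule["from_port"] <= 22 <= rule["to_port"]
        world_open = "0.0.0.0/0" in rule["cidrs"]
        if covers_ssh and world_open:
            return "NON_COMPLIANT"
    return "COMPLIANT"

# SSH open only to an internal range: compliant.
print(evaluate_restricted_ssh(
    [{"from_port": 22, "to_port": 22, "cidrs": ["10.0.0.0/8"]}]))
# All ports open to the world (covers 22): non-compliant.
print(evaluate_restricted_ssh(
    [{"from_port": 0, "to_port": 65535, "cidrs": ["0.0.0.0/0"]}]))
```

The real managed rule evaluates the recorded security-group configuration items; this just shows the decision logic behind the green/red status on the dashboard.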
So this is what AWS Config allows us to do. Let's look at how we can configure these rules. Going back to the AWS Config console, these are the resources in the inventory. Look at the first tab over here, which says rules. By default, Amazon gives us a lot of rules that we can use within our infrastructure; at the time of recording, there are 32 rules included by default. These rules basically check IAM, EC2 instances, root MFA, S3 buckets, and so on. So let's do one thing: let's enable certain rules out here. Let me enable EC2 detailed monitoring; okay, let me enable this particular rule so it can be evaluated. Let's add a few more rules over here. Let's go to the next page. Okay: S3 bucket logging enabled, S3 bucket versioning enabled. We want versioning to be enabled in all S3 buckets, so I'll click on "Save" and add this particular rule as well. I'll click on "Add rule." Let's add a few more rules. Let's see: CloudTrail enabled. This is again a very important rule that should be there, so I'll add this rule. Let me add a few more rules so that our dashboard looks pretty nice.
Let me go to the next step: EIP attached. Again, this is very important, specifically for free tier users, because if you have an EIP that is not attached to an EC2 instance, you will be charged for that EIP. So, one thing to keep in mind is that this rule should be present, at least during AWS free tier usage. A lot of people get charged because they have EIPs that are not attached to any EC2 instance. So just remember that any EIP you keep should be attached. I'll click on "Save." So we have around four rules here, and you see, it is showing me the compliant as well as the non-compliant status. For example, for EC2 detailed monitoring, it is stating that three resources are non-compliant. For S3 bucket versioning enabled, again, there are two non-compliant resources. CloudTrail enabled: yes, we have CloudTrail enabled, so it shows me as compliant, and it will also tell me whether or not the EIPs are attached. So this is one of the ways in which you can configure AWS Config rules.
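The rules clicked through the console can also be enabled programmatically. The sketch below only builds the payloads locally; the rule names are illustrative assumptions, while the `SourceIdentifier` strings are AWS managed-rule identifiers for the rules used above.

```python
# Build the put_config_rule payloads for the managed rules enabled in the
# demo. Rule names are illustrative; the SourceIdentifier values are the
# AWS managed-rule identifiers.

MANAGED_RULES = {
    "s3-versioning-enabled": "S3_BUCKET_VERSIONING_ENABLED",
    "cloudtrail-enabled": "CLOUD_TRAIL_ENABLED",
    "eip-attached": "EIP_ATTACHED",
}

def config_rule_payloads(rules):
    """Return one ConfigRule payload per managed rule."""
    payloads = []
    for name, identifier in rules.items():
        payloads.append({
            "ConfigRuleName": name,
            # Owner "AWS" marks this as a managed (built-in) rule.
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
        })
    # Each payload would be passed as
    # boto3.client("config").put_config_rule(ConfigRule=payload)
    return payloads

for p in config_rule_payloads(MANAGED_RULES):
    print(p["ConfigRuleName"], "->", p["Source"]["SourceIdentifier"])
```

This keeps the compliance dashboard reproducible: the same set of rules can be rolled out to every account from a script instead of clicking through the console each time.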
Now, again, as we discussed, there are around 32 default rules that come built in. What happens if you want to add more rules? You can, of course, add more rules: you can put those rules in Lambda, and you can connect those rules with the Config service. So here you see that there is one EIP that is not attached. Okay, this is dangerous, because I will be charged for this particular unused EIP. So I should be removing the EIP, and you should be doing the same if you have an EIP that is not attached. So there is one non-compliant resource that you see: I have four EIPs, among which there is one that is non-compliant. Let me go to this particular EIP. Okay? So this is the EIP. Let me go to EC2, open Elastic IPs, and paste the EIP, and you will see that this EIP is not attached to any of the instances. So why keep it? Just release it; you will save the cost as well. I'll release this particular EIP. So this is the basic information about AWS Config. Now, there is one more important thing that you should remember. We already discussed the CIS benchmark, and there is a very nice GitHub repository that contains a lot of AWS Config rules that you should have within your AWS account, specifically if you're running production servers and security is important to you.
So if you go to the rules.md file over here, this file basically tells you what rules are present within this particular GitHub repository. You see, there are a lot of rules present related to the IAM password policy, key rotation, whether an IAM user has MFA enabled or not, whether VPC flow logs are enabled, and so many other things. So there are around 34 rules present over here, and there are around 32 rules present by default within AWS Config. If AWS does not provide a rule that you need, you can write your own rules within a Lambda function. So that's the fundamentals of the Config service. I hope this has been useful for you, and I would really encourage you to practise this once. And, if you are in charge of an organization's AWS infrastructure, I highly recommend that you have some kind of dashboard that shows you whether or not all of your resources are compliant. So this is it. I hope this has been useful for you, and I'd like to thank you for viewing.
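For the custom Lambda-backed rules mentioned above, a real handler receives a Config event and reports results back via the `put_evaluations` API; the sketch below shows only the pure evaluation logic for an S3-versioning check, and the configuration-item shape used here is a simplified assumption, not the exact Config schema.

```python
# Sketch of the evaluation logic inside a custom Lambda-backed Config
# rule that flags S3 buckets without versioning. The configuration-item
# dict shape below is a simplified assumption for illustration.

def evaluate_bucket_versioning(configuration_item):
    """Return COMPLIANT only if the bucket's versioning status is Enabled."""
    versioning = (configuration_item
                  .get("supplementaryConfiguration", {})
                  .get("BucketVersioningConfiguration", {}))
    status = versioning.get("status")
    return "COMPLIANT" if status == "Enabled" else "NON_COMPLIANT"

item = {"supplementaryConfiguration":
        {"BucketVersioningConfiguration": {"status": "Enabled"}}}
print(evaluate_bucket_versioning(item))  # COMPLIANT
```

In an actual deployment, this function body would sit inside the Lambda handler, and the verdict would be sent back to Config with `put_evaluations` rather than returned to the caller.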
118. EB Deployment Policy
Hey everyone, and welcome back. In today's video, we will be discussing the Elastic Beanstalk deployment policies. This particular topic is pretty important for exams, and every now and then AWS asks certain questions related to the EB deployment policies. So let's go ahead and understand more about this. Whenever you deploy a new version or new updates to your Elastic Beanstalk application, you must choose how to deploy it. There are several options with which you can go ahead and perform your deployment: the first is all at once, the second is rolling, the third is rolling with an additional batch, and the fourth is immutable.
So we must first understand what each of them is and for what use case each of them is suited. With the all-at-once approach, whenever you deploy a newer version, that newer version is deployed to all the instances simultaneously. So, if your Elastic Beanstalk environment has two instances, the newer version will be deployed to both instances at the same time. During that period, the instances are in out-of-service mode while the deployment is ongoing. If the update fails for a certain reason, you will have to roll back the changes by redeploying the older version of the code. Now, one of the disadvantages of this approach is that since the instances are out of service until the new deployment is complete, your website would typically be down, and this is the reason why this approach is generally not the preferred one.
Now, in the second deployment model, which is referred to as the rolling deployment model, the newer version of your application is deployed in batches. Each batch of instances is taken out of service while deployment takes place. So in contrast to the all-at-once model, where an update is pushed to all the instances, what happens in the rolling deployment model is that one instance, for example, is taken out of service for a period of time, the newer version is deployed there, and then it is put back in service.
So a newer version is not deployed to all of the instances. So if that newer version has certain bugs, the chances are that it will not really affect your entire user base under this type of policy. Now, one of the disadvantages is that since the newer version is deployed in batches, the overall capacity in terms of servers will be reduced while the deployment is happening.
So let's assume that you have a capacity of three servers and you are making a deployment right now. That means one server will be taken out of service while the deployment happens, so the overall capacity reduces during that period of time. And thus, it is not really recommended for applications that are critical in nature or for applications where performance really matters. Now, in order to overcome that, there is one more deployment model: rolling with an additional batch. What happens here is that Elastic Beanstalk launches an additional batch of instances.
So say you have three instances right now within Elastic Beanstalk. During the deployment, Elastic Beanstalk will not touch these three instances; it will launch an additional batch of instances and deploy the newer version there. As a result, with the rolling-with-additional-batch type of deployment model, the full capacity is always met during the deployment. Now, the fourth one is the immutable deployment policy. What happens here is that it deploys the newer version of the application on completely new servers under a new Auto Scaling group.
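The capacity trade-off between rolling and rolling-with-an-additional-batch can be made concrete with a toy simulation. This is only an illustrative model (instance counts and batch sizes are made up), not how Elastic Beanstalk is implemented internally:

```python
# Toy simulation of in-service capacity during a deployment, batch by
# batch. Numbers are illustrative; this models the policies described
# above, not Beanstalk's internal implementation.

def rolling(total, batch):
    """In-service capacity while each batch of existing instances updates."""
    caps, done = [], 0
    while done < total:
        b = min(batch, total - done)
        caps.append(total - b)      # the updating batch is out of service
        done += b
    return caps

def rolling_with_extra_batch(total, batch):
    """Same, but an extra batch of new instances is launched up front."""
    caps, done = [], 0
    while done < total:
        b = min(batch, total - done)
        caps.append(total - b + batch)  # the extra batch keeps capacity up
        done += b
    return caps

print(rolling(3, 1))                   # [2, 2, 2]: capacity dips below 3
print(rolling_with_extra_batch(3, 1))  # [3, 3, 3]: never below 3
```

The output shows exactly why rolling is cheaper but reduces capacity mid-deployment, while the additional batch costs a few extra instance-hours in exchange for full capacity throughout.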
When the new instances pass the health check, they are moved to the original Auto Scaling group, and the older instances are terminated. The impact of a failed update is lower, because if an update happens to fail, you just delete the new Auto Scaling group and the EC2 instances associated with it. So this is a preferred option for mission-critical production systems. With this, let's quickly jump to Elastic Beanstalk, and we'll look into what exactly it might look like. So I'm in Elastic Beanstalk; let me quickly get started, and I'll name the environment kplabs-deployment. The platform will be PHP, and the application code will be manually uploaded.
So I have something called an "EB sample app." What I'll do is quickly add it to an archive, save it as a zip, rename it PHP version 1, and then save it. Now I'll select this PHP version 1 zip and click on upload. Once it is uploaded, I'll go to "Configure more options," we'll go ahead with the low-cost preset, and the EC2 instance type should be t2.micro, because that is the only one that comes under the free tier, and we can go ahead and create the app.
All right, so the environment has been successfully deployed, and if I quickly go to this environment here, you will see a sample PHP-based page. Now, let's do one thing. Let me quickly open the index.php file. I'll open up my Atom editor, go a little down, and there is an h1 header here. I'll just change it to "congratulations, version two" and save the changes. Now I'll create a zip yet again, and I'll name it PHP version 2. This time, if I click on "upload and deploy," it basically asks me to upload the application. But before this really happens, there is one important aspect that we need to see. If you look into the configuration page, there is an option for rolling updates and deployments.
Now, if I click on modify over here, the deployment policy is all at once. If I expand this, you see only all at once and immutable. The question is why there is no rolling deployment policy, or rolling with an additional batch. The reason these are not present is that a rolling deployment deploys one batch of instances at a time, and since in this low-cost configuration we only have one instance, we do not get the full set of deployment policy options. And this is the reason why the deployment policy type here is all at once. Since we have not changed anything, I'll cancel, and within the dashboard, I'll do an upload and deploy. I'll choose a file, pick PHP version 2, and go ahead and deploy the changes. Because this is an all-at-once deployment, if there were two or three instances, this specific PHP version 2 would have been deployed to all two or three instances simultaneously. And the caveat here is that if this newer version had a bug, your entire website would have gotten an error in an instant. This is the disadvantage of the all-at-once deployment model. I hope you understood the basics of the Elastic Beanstalk deployment models. And this is it about this video; I hope it has been informative for you, and I look forward to seeing you in the next video.
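The deployment policy seen in the console maps to an Elastic Beanstalk option setting in the `aws:elasticbeanstalk:command` namespace. The sketch below only builds that option setting locally; the environment name in the comment is an illustrative assumption.

```python
# Build the OptionSettings entry that selects an Elastic Beanstalk
# deployment policy. The four values match the policies discussed above.

VALID_POLICIES = {"AllAtOnce", "Rolling", "RollingWithAdditionalBatch", "Immutable"}

def deployment_policy_setting(policy):
    """Return the option-setting dict for the given deployment policy."""
    if policy not in VALID_POLICIES:
        raise ValueError(f"unknown deployment policy: {policy}")
    return {
        "Namespace": "aws:elasticbeanstalk:command",
        "OptionName": "DeploymentPolicy",
        "Value": policy,
    }

# This dict would be passed (with credentials configured) as:
# boto3.client("elasticbeanstalk").update_environment(
#     EnvironmentName="kplabs-deployment",   # assumed name
#     OptionSettings=[deployment_policy_setting("Immutable")])
print(deployment_policy_setting("Immutable")["Value"])
```

Setting this in a script or a saved configuration is how teams keep the policy consistent across environments instead of relying on the console default.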
119. Performing Immutable Policy based Deployments
Hey everyone, and welcome back. In today's video, we will continue with our understanding of the EB deployment policies, and we'll look into some more of them. In the previous video, we had already deployed the Elastic Beanstalk environment, and if you look into the URL, you see the "congratulations version two" page that we had created. This environment was based on an all-at-once policy. Now, if you look into the EC2 instances in the Ohio region, you will see that there is one EC2 instance that has been launched, and this is what is serving the traffic. Let's go ahead and understand more about the immutable policy that Elastic Beanstalk offers. Within the immutable deployment policy, instead of deploying the newer version of your application on the existing servers, this policy will create its own set of servers where the newer version will be deployed.
Within that new set of servers, after they are launched and the health check is passed, Elastic Beanstalk will assume that the update has been successfully deployed, and they will be moved into the permanent Auto Scaling group. Although theoretically it might look a little challenging to understand, let's do something practical so that we can understand it much better. Returning to the EC2 console, because we already have an EB environment, if I quickly navigate to the Auto Scaling groups, you will see that one Auto Scaling group has been created. Within this Auto Scaling group, in the activity history, there is one instance; this is the instance that is currently running. Within the scaling policies, it basically says that there are no policies and your Auto Scaling group is configured to maintain a fixed number of instances. This is actually coming from Elastic Beanstalk, where the environment type within the capacity settings is single instance, and thus there is a single instance over here.
So the first thing that we will do is change the deployment policy: we will change it from all at once to immutable. Once done, I'll go ahead and click on "apply." Whenever you do a practical on these aspects, specifically the immutable deployment policy, it will take a lot of time, so just be patient. Throughout the video, I'll be pausing during the stages where the update is happening, so it does not go on quite as long. Perfect. So the environment update has been completed successfully, and we have now moved from the all-at-once deployment model to the immutable deployment model. What I have done is create one more PHP version, version 3; this time it will say "congratulations version three." That is the only change that I have made. So now we'll go ahead and deploy the third version of our application, and while it's getting deployed, we'll look into how exactly the infrastructure changes in this type of deployment model. I'll go ahead and click on "upload and deploy," choose a file, select PHP version 3, and deploy it.
Within the recent events, you will see that an environment update is starting, and it says that immutable deployment policies are enabled. This is as expected, and it is launching one instance with new settings to verify the health. So basically, the PHP version 3 that we have uploaded will be deployed on the new instance that will be launched. Along with that, a temporary Auto Scaling group will be created, and the new instance will be attached to that Auto Scaling group. So let's quickly verify that. If I click on "refresh" over here, you will see that there are two Auto Scaling groups created. Within the instances tab, you will see that there is one more instance that is getting created. Just to quickly identify it: the new instance ID starts with FD, and the earlier one starts with F9. Just remember which instance is new, because both of them have the same names.
Now, the next important part that we need to remember is this message: the environment health has transitioned from OK to "application update is in progress on one instance." So the application update is only installed on the new instance that was launched; it is not installed on the older instance right away. Once that is completed, Elastic Beanstalk will verify that the new instance passes the health check. So if you see, this is the newer instance, the one starting with FD. Now that the newer update has been deployed there, Elastic Beanstalk will verify that it passes the health checks. After the health check passes, the new instance, which was created in a temporary Auto Scaling group, is moved to the permanent Auto Scaling group. If I quickly do a refresh, there are two Auto Scaling groups; one has "Immutable" in its name. That was the temporary Auto Scaling group where the instance was created, and the instance has now been moved from there to the permanent Auto Scaling group over here.
So this is the new instance that is being attached. Now, once the deployment is successful, Elastic Beanstalk will terminate the older instances. So if you see over here, the older instance, whose ID starts with F9, is now shutting down, primarily because the latest version of the application is working, has passed the health check, and has been deployed to the Auto Scaling group, and everything is working perfectly fine. We only have a single instance over here, but in case you have an Auto Scaling group with multiple instances, a similar type of process will be followed. Now, if you're wondering why that newly created instance was moved from the temporary group to the permanent group, it's because the new instance that Elastic Beanstalk created successfully passed the health check.
So what happens is that it moves from the temporary Auto Scaling group, where that check was happening, to the permanent Auto Scaling group here. Once it is in the permanent Auto Scaling group, the older instance that was running is terminated. If you remember, the older instance was the one starting with F9; this instance was running, and it got terminated. And now you have the newer instance here, which has already successfully passed the health check and is now serving the traffic. And now, if you see, the environment update has completed successfully. So this is how the immutable deployment policy really works. I hope you understand why two Auto Scaling groups are created, why the new instance moves from the temporary Auto Scaling group to the permanent one, and why the older instance is terminated. So that's about the immutable deployment policy. Again, you don't really have to understand this in depth as far as the exam perspective is concerned, but we are doing the practise so that the overall real-world understanding is there as well.
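The sequence observed above can be summarised with a small local model: launch a new instance in a temporary Auto Scaling group, health-check it, move it to the permanent group, and terminate the old instance. The instance IDs below are illustrative stand-ins for the F9/FD instances in the demo.

```python
# A local model of the immutable deployment sequence: temporary ASG,
# health check, move to permanent ASG, terminate old instances.
# Instance IDs are illustrative.

def immutable_deploy(permanent_asg, new_instance, health_ok):
    """Return the event log and the final permanent-group membership."""
    events = ["launch temporary ASG",
              f"launch {new_instance} in temporary ASG"]
    if not health_ok:
        # Failed update: throw away the temporary group, keep serving
        # traffic from the untouched old instances.
        events.append("health check failed: delete temporary ASG, keep old instances")
        return events, permanent_asg
    events.append("health check passed")
    old = list(permanent_asg)
    permanent_asg = [new_instance]              # new instance moved in
    events.append(f"move {new_instance} to permanent ASG")
    events += [f"terminate {i}" for i in old]   # old instances terminated
    events.append("delete temporary ASG")
    return events, permanent_asg

events, asg = immutable_deploy(["i-0f9"], "i-0fd", health_ok=True)
print(asg)  # ['i-0fd']
```

The `health_ok=False` branch is what makes the policy safe: a failed update never touches the instances already serving traffic.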
120. Blue-Green Deployments
Hey everyone, and welcome back. In today's video, we will be discussing blue-green deployments. In this type of deployment model, there are two environments: the first is known as the "blue environment," and the second as the "green environment." The blue environment is your existing environment, where your application is deployed and receiving production traffic. The green environment is basically a parallel environment that is running the new, updated version of your application. This can be illustrated with a diagram, which shows the two environments.
One is blue, and the second is green. All the traffic is currently going to blue, and blue is connected to the database over here. Now, the deployment part basically means routing the production traffic from the blue environment to the green environment. So, in this model, similar to immutable deployments, the existing infrastructure isn't changed; all you're doing is switching from your existing stack to the new parallel stack where your new application is deployed. Once the traffic switches from blue to green, green is connected to the back-end database. So this is where the switch happens. Now, how quickly you want to switch really depends. When it comes to services like Elastic Beanstalk, where we simply swap the DNS, it's an all-or-nothing proposition. Let me give you an example: let's say this is a Beanstalk environment, you directed all the traffic to the green environment, and something failed there.
Then all the users will be receiving those failure messages; it's all or nothing. So that is what Elastic Beanstalk's deployment model uses. However, a lot of organizations make use of weighted DNS. Instead of directing all traffic to green, assume you direct 90% of the traffic to your blue environment and 10% to green. This is something that you can do, and it is a much better approach. Now, in such environments where a DNS change occurs, one important factor that needs to be remembered is the time to live. Because if you set a TTL of 300 seconds, the DNS client will not query the DNS server again for 300 seconds. Along with the time to live, you also need to be sure about the Route 53 propagation time. And there may be some misbehaving clients who continue to send requests to the blue environment.
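The weighted-DNS idea can be sketched with a toy resolver model: each resolution picks an environment with probability proportional to its weight. The weights and environment names are illustrative; this models weighted routing in general, not Route 53's implementation.

```python
# Toy model of weighted DNS routing for blue-green: 90% of resolutions
# go to blue, 10% to green. Weights and names are illustrative.

import random

def pick_environment(weights, rng):
    """Pick an environment with probability proportional to its weight."""
    total = sum(weights.values())
    r = rng.uniform(0, total)
    upto = 0.0
    for env, weight in weights.items():
        upto += weight
        if r <= upto:
            return env
    return env  # fallback for floating-point edge cases

rng = random.Random(42)  # seeded for reproducibility
picks = [pick_environment({"blue": 90, "green": 10}, rng) for _ in range(1000)]
print(picks.count("blue"), picks.count("green"))
```

Shifting the weights gradually (90/10, then 50/50, then 0/100) is what limits the blast radius of a bad release compared with the all-or-nothing DNS swap, although the TTL caveat above still applies to every weight change.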
So this is typically the issue if you are using the DNS alteration method. Now, there are various ways in which we can achieve the blue-green deployment model. One way is to update the DNS routing via Route 53. You can also swap the Auto Scaling group behind your Elastic Load Balancer, or swap the launch configuration, though again, this is not a recommended way. You have the swapping of Beanstalk environment URLs, which is specifically for EB environments. You also have the option of cloning the stack if you're using OpsWorks and then updating the DNS. Now, from what I've seen, a lot of organizations go with the load balancer approach, because with it you do not really have to worry about the TTL, the Route 53 propagation time, or any misbehaving DNS clients: the load balancer DNS remains the same, and the only things that change are the instances behind the load balancer.
So with this kind of approach, you don't really have to worry about the DNS-related changes, and it is for this reason that many organisations prefer it. So let me quickly give you a demo related to the Beanstalk environment. I'm currently in my Elastic Beanstalk console, and you can see there are two environments over here. The first environment does not have a version label, and the second is the new environment, which is version 1. So basically, the second one is the green environment over here, where we deployed the newer version of the application. Let's go to the first environment here, and if we look into the URL, you can see it is coming up with a basic congratulations page. Now let's do one thing: let's copy this URL, and I'll quickly do an nslookup here, and it is returning an IP address that starts with 18. Now, let's assume that we want to do a blue-green deployment. The first thing that we would typically do is launch the green environment; in our case, the green environment is already launched. However, we want to perform the actual deployment here.
So if you look into the definition of deployment here, it basically means routing the production traffic from the blue to the green environment. For Elastic Beanstalk, the blue-green deployment can be achieved with the help of swapping the environment URLs. Before we do that, let me show you exactly what the green environment looks like. So this is the green environment, and if I open it up, you'll see that it is the basic NGINX page. The two environments look a little different; I have intentionally done this so that it is easy to identify which is blue and which is green. Now let's do one thing: since our green environment is ready, let's do a "swap environment URLs" here, and we'll click on swap. As a result of this approach, your DNS remains unchanged.
So the DNS remains the same; you don't have to worry about the DNS being changed. The only difference would be the back-end IP addresses or back-end CNAMEs. Now let's execute an nslookup again, and you see, you are now given an IP address starting with 35.169; when we did the nslookup earlier, it returned the IP address starting with 18. In order to verify, we can just refresh the page, and we should see the NGINX default page. In the event that this deployment fails or things do not work as expected, you can swap the environment URLs back. Let's quickly do a swap. Now what will happen is that the environment URLs will be swapped again, and your DNS will again point to the blue environment over here. So, if I quickly refresh over here, the problem is, well, let me show you: the DNS cache is still present. We were already discussing this; it is an important issue. Until the DNS TTL expires, whatever cache is present will still be served.
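The "swap environment URLs" action corresponds to Elastic Beanstalk's SwapEnvironmentCNAMEs API. The sketch below models the swap locally on a plain dict; the environment names and CNAMEs are illustrative assumptions, and the real call is shown only in the docstring comment.

```python
# Local model of swapping two Elastic Beanstalk environment CNAMEs.
# Environment names and CNAME values are illustrative.

def swap_cnames(envs, source, destination):
    """Swap the CNAME records of two environments.

    The real operation would be:
    boto3.client("elasticbeanstalk").swap_environment_cnames(
        SourceEnvironmentName=source,
        DestinationEnvironmentName=destination)
    """
    envs[source], envs[destination] = envs[destination], envs[source]
    return envs

envs = {"blue-env": "app-blue.us-east-1.elasticbeanstalk.com",
        "green-env": "app-green.us-east-1.elasticbeanstalk.com"}
swap_cnames(envs, "blue-env", "green-env")
print(envs["blue-env"])  # the CNAME that previously pointed at green
```

Because only the CNAME targets are exchanged, the public URL your users hit never changes; a second identical swap rolls the traffic back, which is exactly the rollback performed in the demo.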
So this would typically happen for a few seconds, so we'll have to wait a bit here. Great. So now the PHP application has started to load, and as always, it might take a certain amount of time, specifically if you are going with the DNS-based approach. One thing to keep in mind is that, while there are numerous methods for achieving blue-green deployments, not all of them are recommended; swapping the launch configuration, again, is not the best way to go about it. Along with that, one important thing to remember for blue-green deployments is to make sure that, in the event you are releasing a green environment, your new updated application does not make any major schema changes. Because if the green environment performs a schema change on the database and you then switch back from green to blue due to an issue with the green environment, the blue environment will have to deal with the new schema, and since it is running an older version of your application, it might break again.