111. OpsWorks Lifecycle Events
Hey everyone, and welcome back. In today’s video, we will be discussing the OpsWorks lifecycle events. Now, OpsWorks has basically five events that occur during the instance lifecycle. These events are Setup, Configure, Deploy, Undeploy, and Shutdown. Each of these events can be associated with specific recipes from a cookbook, and those recipes are executed when the event runs. So, before we go ahead and understand more about this, let me show you exactly how it might look in OpsWorks. So I’m in my OpsWorks dashboard. Let me click on the sample stack that we created, and I’ll select the Node.js layer. If you go to Recipes within this Node.js layer, you’ll notice that the various events are listed: Setup, Configure, Deploy, Undeploy, and Shutdown. So let’s say that during the Setup stage you want to install certain packages. Then you’ll have to specify the recipes over here. Now, in this Node.js layer, you only have the recipe associated with the Node.js demo.
So let me quickly click Edit so we can understand this better. These are the various events, and associated with the Deploy event you have the nodejs_demo recipe. We can understand this better if I quickly download the cookbook. Let me download this. Great, so it has been downloaded, and if I open it up, we can see it over here. So, for deployment, you have the nodejs_demo directory within this compressed file. This is the recipe that will run during the Deploy stage. Now, let’s say that during the Setup stage you want to install certain packages, maybe Apache or HAProxy. Then you can create a recipe for that, and you can specify the name of the recipe over here; for example, you could call it apache. When the event runs, OpsWorks will look at the repository URL, fetch the compressed file, find the apache directory, and from there it will go ahead and do the things that you have mentioned within the recipe. So this is what the events are.
Now, it is important for us to understand what each of these events means and at what stage of the lifecycle it occurs. So let’s go ahead and understand that. The Setup event is typically triggered after the started instance has completed booting up, and it is primarily used for the initial installation of software packages. So let’s say you want to install PHP and Apache; then you can specify those recipes within the Setup event. This matters because you may want to push your application during the Deploy event; it might be an artefact in an S3 location. So before you pull the binary or the artefact, you need to make sure that you have all the relevant packages installed that can run that binary. Those prerequisites, the installation of packages, can be done in the Setup event. The second is the Configure event. Now, this event runs on instances when one of the following conditions occurs. The first is when an instance enters or leaves the online state.
The second is when you associate or disassociate an Elastic IP with an instance. And the third is when you attach or detach an Elastic Load Balancer from the layer. A cluster is one of the use cases I can tell you about. So let’s assume that you have a MongoDB cluster of three instances. Now, the cluster is getting slow, so you decide to add a new instance, and that instance would become part of the cluster. Typically, we already know that when a new instance gets booted up, the first event that gets executed is Setup. So the Setup event might install the MongoDB packages and the required dependencies. When Setup is finished, the second event is Configure.
Now, do remember that: first the Setup event, then the Configure event. One important thing to remember is that the Configure event is executed on all of the instances in the stack, not just one. All right, this is very important to remember. So let’s say that the new instance is added. This Configure event will run on all three existing instances as well. Since this is a cluster and a new instance has come up, it is important for instance 1, instance 2, and instance 3 to know that instance 4 has come up. The new instance can then be added to the cluster, and you can configure replication and other aspects. This is what the Configure event is generally used for. A reverse proxy is another use case for the Configure event. Let’s say a new instance has come up, and you want to add the hostname of that instance inside the reverse proxy; otherwise, the reverse proxy will not forward traffic to it. So the Configure event can be used in such use cases as well. Now, the third and fourth events are Deploy and Undeploy.
The Deploy event basically allows us to define what happens when we deploy a new version of the application. We already saw the Deploy event in the Node.js stack, where the nodejs_demo recipe was present. Undeploy, in turn, allows us to delete the application: running the undeploy command removes the application from a set of application servers. Now, this can be run manually as well. Let me quickly show you how you can do that. So, if you go to Deployments and click on “Deploy an app” over here, this is the Node.js application. You have both the deploy and undeploy options present over here. So if you want to undeploy a specific app, you can do that directly from here.
And basically, what happens is that when you run this, it triggers a lifecycle event. It already states that undeploy triggers an Undeploy lifecycle event that runs the undeploy recipe to remove all the versions of the app from the specified instances. The same goes for the deploy command over here, which is used for deploying an app. Now, the last lifecycle event is Shutdown. This event is executed when OpsWorks tells the instance to shut down, before the instance is terminated. So let’s say that the instance is registered somewhere, say a central Satellite server or a Spacewalk server. Before it gets terminated, what we want is for the instance to deregister itself from the central server, so you can specify recipes over here that will deregister this instance before it is terminated. So the Shutdown event is also pretty useful.
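The five events and the way recipes hook into them can be summed up in a small Python sketch. This is only an illustrative model of the mapping, not anything OpsWorks itself runs; the event names are the real lifecycle events, but the recipe names (apart from nodejs_demo, which we saw in the console) are made-up examples in Chef’s cookbook::recipe format:

```python
# Map each OpsWorks lifecycle event to the custom recipes that would run.
# Event names are the real OpsWorks lifecycle events; the recipe names are
# hypothetical examples (cookbook::recipe format, as OpsWorks expects).
LIFECYCLE_RECIPES = {
    "setup":     ["apache2::default", "haproxy::install"],  # after first boot
    "configure": ["cluster::register_peers"],   # runs on ALL stack instances
    "deploy":    ["nodejs_demo::default"],      # push a new app version
    "undeploy":  ["nodejs_demo::remove"],       # remove the app
    "shutdown":  ["satellite::deregister"],     # before instance termination
}

def recipes_for(event):
    """Return the custom recipes that would run for a lifecycle event."""
    if event not in LIFECYCLE_RECIPES:
        raise ValueError("unknown lifecycle event: " + event)
    return LIFECYCLE_RECIPES[event]
```

For example, `recipes_for("shutdown")` would return the deregistration recipe we just discussed, mirroring how you would list it under the Shutdown event in the layer’s Recipes tab.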
112. ELB Sandwich
Hey everyone, and welcome back. In today’s video, we will be discussing the ELB sandwich. Now, I am sure you are already familiar with the building blocks of this architecture. However, in an exam, if you see a question that mentions the ELB sandwich, a lot of people get confused because they do not know that this term refers to a specific architecture. So, let’s take a closer look at the ELB sandwich architecture. If you look into this type of architecture, you have an internet gateway, and below that, you have an ELB. This ELB is sending traffic to a set of web servers. This set of web servers is then forwarding traffic to a second ELB, which in turn is sending the traffic to another set of servers. So in this specific architecture, you have the first ELB and the second ELB, with a set of servers sandwiched between them.
As a result, this architecture is known as the ELB sandwich architecture. This kind of architecture is very useful in a lot of scenarios. What you can do with this type of architecture is place a set of web application firewalls here. So when you receive traffic from the external ELB, this set of web application firewall EC2 instances in the Auto Scaling group will verify whether the incoming traffic contains any security-related events or not. If not, then this set of web application firewall servers will forward the traffic to the internal ELB, and the internal ELB will then distribute the traffic among the set of web servers in auto-scaling mode. So this is what the ELB sandwich architecture looks like. In exams, there are a lot of cases where you might get questioned about the ELB sandwich architecture. So just understand where this type of architecture can be used.
I’ll give you one more example. In fact, in one of the organisations that I have been working with, what we had was an ELB Sandwich architecture, where you had an external ELB. Now, within this, within the first layer, we had a set of NGINX instances, which used to do a lot of filtering related to the web application firewall, related to static assets, and various others. Now, this set of NGINX instances would then forward the traffic to an internal ELB, and the internal ELB would in turn send the traffic forward to the application servers.
So you had these web servers in auto-scaling mode, and you had these application servers in auto-scaling mode. So, with the help of the ELB sandwich architecture, you can have a variety of architectural designs. This is something that you need to remember: make sure that you understand what the ELB sandwich architecture is. And one important advantage that makes the ELB sandwich architecture popular is that it allows the instances in the middle tier to scale horizontally. This is one of the important aspects of this type of architecture.
113. Stateful vs Stateless Firewalls
Hey everyone, and welcome back. In today’s video, we will be discussing stateful versus stateless firewalls. Now, typically, if you look into the basics of TCP/IP communication, let’s assume that you have one server here and that SSH is running on the server on port 22. Along with that, you have the client over here. The client wants to connect via SSH to this specific server. So, it will send a request, and the server IP address is 192.168.10.1.
So, the client will send the request to 192.168.10.1 on port 22. Now, when it sends the request to the server, it will need to have a source port. This 22 is the destination port, because the traffic is going towards the destination server. However, along with that, the client needs to open a source port. In this case, we are assuming that the source port is 55607. Now, when the server receives the request and wants to reply back, it will reply back on the source port that the client had opened, which is 55607. So this is how basic TCP/IP communication works in terms of the source IP and source port. Now, let me quickly show you this in Wireshark so it becomes easier for us to understand. So I’ll quickly launch Wireshark in Windows. Great. So this is the Wireshark console. And basically, since I’m connected to WiFi, you can see that there is a lot of traffic ongoing. So I’ll be capturing the WiFi interface.
I’ll double-click on that, and you’ll see there are a lot of packets getting captured over here. So let me take a random packet. Now, if you look at the IP layer, it will have the source and the destination IP. So the source is 192.168.43.135, and the destination is 54.239.31.83. If you just compare it here, the 192.168 address is the source, and the address starting with 54 is the destination. All right? So this is one part. Now, when you open up the TCP header, you see the source port and the destination port. So if my computer or my browser wants to communicate with a website, it needs to have a source port.
The destination port in this case is 443, and the source port is 63378. All right? And this is something that we were discussing; in our diagram, instead of 63378, we have 55607. Now, in the second case, we have a firewall involved over here, and this firewall is a stateless one. So what really happens is that when the client sends the request to the server, the firewall will first check whether the inbound port 22 is allowed or not. If the inbound port 22 is allowed, then the request from my client will reach the server end. Now the server also needs to respond back, so it tries to respond back to the client’s IP, which is 172.10.15.7, and on the client’s port, which is 55607. However, before the server can send the data back in an outbound fashion, the firewall will also have to verify whether that port is open or not. Because if port 55607 is not allowed in the outbound, the server will be unable to communicate back even though the inbound request was received. So this is the characteristic of the stateless firewall.
Now, there are certain important points that we need to remember here. The first one is that the client typically initiates the request by choosing a port from the ephemeral port range. The ports from 0 to 1023 are called the “well-known” or “reserved” ports, and the ephemeral port range is something that the client side decides. It depends on which operating system the client is on. If you talk about the Linux kernel, it will typically choose from the ephemeral range 32768 to 61000. If the request is originating from an ELB, it uses 1024 to 65535, and Windows XP typically uses 1025 to 5000. So the port I’m referring to is the one that the client uses; this is the typical source port range for the client initiating the connection. Now, in terms of a stateless firewall, the problem is this: let’s say that the client has chosen 55607. Then, within the stateless firewall, we have to ensure that the outbound port 55607 is allowed. If a second client is running Windows XP, it might not use port 55607 at all; instead, it might use port 1025.
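You can observe the ephemeral source port yourself with a few lines of Python sockets, no Wireshark needed. This sketch opens a listener on localhost (standing in for the server’s fixed destination port) and connects a client to it; the operating system assigns the client’s source port from its ephemeral range, which is why it always lands above the well-known 0-1023 range:

```python
import socket

# Server side: listen on a fixed port (0 lets the OS pick a free one for us).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server_port = server.getsockname()[1]   # the "destination port", like 22 or 443

# Client side: connect without choosing a source port ourselves.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
source_port = client.getsockname()[1]   # OS-assigned ephemeral source port

print("destination port:", server_port, "source port:", source_port)
assert source_port > 1023   # ephemeral ports sit above the well-known range

client.close()
server.close()
```

Run it a few times and you will see a different source port on each connection, which is exactly why a stateless firewall cannot simply whitelist one outbound port for replies.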
So again, in the stateless firewall, you have to make sure that outbound port 1025 is allowed as well. It really becomes very difficult, because the stateless firewall does not maintain any state; you have to explicitly allow the ports to go outbound. Now let’s take the case of stateful firewalls; stateful firewalls are quite intelligent. So let’s say that this is a stateful firewall, and the client is sending the request towards the server at 192.168.10.1 on port 22.
Now, before the stateful firewall allows this specific request to reach the server, it will check whether port 22 is allowed from the client’s IP address. If it is allowed, then it will automatically allow the reply from the server back to the client. It will not inspect the outbound rules. The reason is that if the client is sending the request to the server and the server’s inbound is allowed, then obviously the server will need to reply back to the client. And this is why the stateful firewall will not look into the outbound rules for this connection.
It knows that the request originated from a specific client, so it will allow the outbound reply. Now, the security group in AWS is of type stateful. So let’s look into how exactly this would look. I am in my EC2 console, and I have one EC2 instance up and running. This EC2 instance has a security group, and the security group is a stateful firewall. Now, if you look at the inbound rules, it is allowing all the traffic. However, in outbound, it is not really allowing any traffic over here, all right? So let’s try to connect to this EC2 instance. Let me take the IP address of the instance.
So I’m in my CLI, and if we try to log in, you can see that I’m logged in. The reason I’m logged in here is the stateful firewall. The stateful firewall remembers that the request came from a specific IP and that it was allowed inbound. So if the request was allowed inbound and it was initiated by the client, then the server would typically need to reply back. And this is why, irrespective of what you put in the outbound rules, the server will be able to respond back. The session state is actually stored within the stateful firewall. Now, let me quickly show you something interesting over here. If I try to do a ping to google.com, it will not be allowed, all right? Now, the reason why it is not allowed is, again, the firewall.
You see, there are no security group rules in outbound. And because this ping to google.com is a new request that originates on the server and is directed to the google.com destination, it is not permitted: in the session state, this connection does not appear as an established one. It is a new connection, and this is why the outbound traffic is not allowed. So let’s do one thing: let’s add an outbound rule. Let me put outbound as all traffic, with the destination 0.0.0.0/0; all right, so outbound is now allowed to everywhere. And now if I try to do a ping to google.com, I am able to do it perfectly well. So this is what the stateful firewall is all about. Now, we should also look into the stateless firewall, because otherwise our understanding in terms of practicality will be incomplete. In AWS, the network ACL is of type stateless. So let’s play around with that as well. I’ll select the VPC where our EC2 instance is, and let’s go a bit down.
I have a network ACL here. Generally, in AWS, whenever you create a VPC, a network ACL is created by default. So this is the default network ACL over here, all right? Now, this network ACL is of type stateless. So, even though the client sends the request to the server, the network ACL does not remember the state; if the outbound rule does not allow it, the reply traffic will not be able to reach the client. So let’s try this out. In the security group, I’ll make sure we allow all traffic on both inbound and outbound, so the security group will not come into play here. The only thing that will come into play here is the network ACL. So let’s do one thing: let’s edit the outbound rule. Here, instead of “allow,” let me put “deny,” and I’ll click on “save,” and let’s try to connect to the server yet again. So I’ll do SSH as ec2-user at the IP address of the server. And currently, as you can see, I am not able to connect to the server. Even if you try pinging, you will not be able to get a reply back.
Now, let’s look at what is happening here. If you talk about ping, the echo request is successfully reaching the server. However, the server is not able to reply back because, within the outbound rules of the network ACL, we have everything marked as denied. In a security group, by contrast, even if you do not have any outbound rule (and no outbound rule in a security group generally means default deny), the security group will still allow the reply, because it knows that the client initiated the request and that the traffic was allowed inbound. So for that specific session, outbound is allowed by default. All right, so now let’s remove this rule. I’ll just put it back to “allow,” and now you see the traffic is coming back. As we were discussing, one issue is that if you use a network ACL, you cannot really pin down the outbound port, because different clients use different ephemeral port ranges. Let’s say you allow only one ephemeral port range in the outbound rules of the network ACL; then if someone on Windows XP tries to connect to your application, he will not be able to get the reply back.
As a result, network ACLs typically allow all outbound traffic. So this is the high-level overview. I hope you found this practical useful. Now, let’s revise again. There are two main types of firewalls: the stateful firewall and the stateless firewall. A stateful firewall maintains the connection state and knows which packets to allow outbound, even when the outbound rules are restricted. On the other hand, a stateless firewall does not maintain any connection state; it treats each packet traversing inbound or outbound as a new, separate packet. Now, generally, whenever I take interviews, if it is a security interview (because I have been working in the security domain), one of the first questions that I ask is about stateful versus stateless firewalls. This is actually one of the basic things that, if you are working in the security or network security domain, you should know. And I have seen that a lot of other interviewers from other organisations also have similar question sets for their interviews.
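The whole distinction can be captured in a toy Python model: the stateful firewall keeps a connection-tracking table and automatically permits replies to tracked sessions, while the stateless firewall checks every packet against its rule lists in isolation. This is only an illustration of the concept, not how security groups or network ACLs are actually implemented:

```python
# Toy model of stateless vs stateful packet filtering.
class StatelessFirewall:
    """Evaluates every packet against static rules; no memory of sessions."""
    def __init__(self, inbound_ports, outbound_ports):
        self.inbound_ports = set(inbound_ports)
        self.outbound_ports = set(outbound_ports)

    def allow(self, direction, port, peer_port=None):
        rules = self.inbound_ports if direction == "in" else self.outbound_ports
        return port in rules

class StatefulFirewall(StatelessFirewall):
    """Remembers allowed inbound connections and permits their replies."""
    def __init__(self, inbound_ports):
        super().__init__(inbound_ports, outbound_ports=[])  # no outbound rules
        self.sessions = set()   # connection-tracking table

    def allow(self, direction, port, peer_port=None):
        if direction == "in" and port in self.inbound_ports:
            self.sessions.add((port, peer_port))   # track the session
            return True
        if direction == "out" and (peer_port, port) in self.sessions:
            return True   # reply to a tracked session: allowed automatically
        return port in self.outbound_ports
```

With inbound port 22 allowed and no outbound rules at all, the stateful model still allows the server’s reply on ephemeral port 55607, while the stateless model, like a network ACL with a deny-all outbound rule, drops it.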
114. Overview of AWS VPN
Hey everyone, and welcome back. In today’s video, we will be discussing the site-to-site VPN tunnel. Now, a site-to-site VPN tunnel allows two networking domains to communicate securely with each other over an untrusted network like the Internet. Within the name itself, we have “site to site”: there are two sites involved here. These can be two different locations between which you want to communicate securely. It can be between an EC2 instance and a data center. It can be between two different VPCs. It can be between AWS and Azure, or any other location.
Remember that a site-to-site tunnel is also referred to as an “S2S” tunnel. So in case you hear about “S2S,” it basically means site to site. Now, for the tunnel to be established, let’s assume that you have an EC2 instance; this acts as the VPN termination point. And here you have the data center, so a VPN tunnel has been established over here. Now, one of the challenges that an organisation might typically face with a site-to-site VPN is high availability. Basically, if you see over here, there is a single tunnel endpoint on each of the sites. You have the EC2 instance, which acts as a VPN termination point, and if this EC2 instance goes down, then the entire tunnel would break. Site-to-site tunnels were fairly common when AWS did not have an inter-region VPC peering feature. For instance, let’s assume that you wanted to establish a tunnel between Singapore and Mumbai; VPC peering was not an option back then, so organizations used to rely heavily on site-to-site VPN tunnels.
Furthermore, many organizations nowadays are based on a hybrid cloud, partly on-premises and partly on AWS. A site-to-site VPN tunnel is therefore critical in this scenario. So, we were discussing the availability challenge: if you are using an EC2 instance for the site-to-site VPN and that EC2 instance goes down, then your entire VPN connection would break. In order to overcome that, what organizations typically do is establish multiple tunnels. So here you can see that you have one tunnel, which is an active tunnel, and then you have one more tunnel, which is a passive tunnel. If the active tunnel goes down, then you can switch over to the passive tunnel for high availability. So here’s an example diagram: this is a tunnel established between Mumbai and North Virginia. Again, you could do this via VPC peering as well, but let’s assume that only the left side is AWS; the right side could be Azure, for example.
Then you need to use a site-to-site VPN. Now, when it comes to the architecture of a site-to-site VPN, there are certain key terminologies that you need to understand. The first one is the virtual private gateway, and the second is the customer gateway. The customer gateway is nothing but the VPN termination endpoint on the customer side. This can be a firewall, or it can be a server that acts as the IPsec VPN tunnel termination endpoint, et cetera. Now, on the AWS side, we make use of the virtual private gateway. Do remember that there is no mandatory need to have a virtual private gateway, but a virtual private gateway has its own advantages. Like we were discussing earlier, if the EC2 instance goes down, then the entire VPN tunnel that we have established over here will also break. The virtual private gateway, in contrast, is highly available. In order to understand this, let’s use the example of this specific diagram. A virtual private gateway has built-in high availability for a VPN connection. Basically, what happens is that the virtual private gateway has two endpoint IP addresses, each located in a different availability zone.
So you have endpoint IP one here, and you have endpoint IP two here. Now, what you do from the customer side is establish two VPN tunnels, one to endpoint IP one and one to endpoint IP two, and these two tunnels together form a single VPN connection. Do remember that even though you have a virtual private gateway, if you implement this in your organization, specifically if you have multiple virtual private gateways and multiple VPN connections, there are a lot of instances where one of the endpoints goes down and then you have to switch to endpoint IP two. Now, the great thing here is that this high availability is managed by AWS. So we do not really have to worry about it, but you will get into situations where you see that one of the tunnels is down. However, if you have set up your VPN connection properly, you do not really have to worry, because high availability will be taken care of. This can be understood with the diagram over here; this is one of the screenshots that I took from a different video. Here you see the VPN connection: this is a site-to-site VPN, and this VPN connection has two tunnel IP addresses over here.
The first IP address is 18.216.150.193, and the second is 18.222.1.76. Here you can see that there is one endpoint with the status “UP,” and another with the status “DOWN.” Ideally, if you are implementing it, make sure that both of them are up; that basically means that from your customer location, you have two VPN tunnels established. So this was just a representation of the two IP addresses associated with the endpoints of the virtual private gateway. This is the high-level overview of site-to-site tunnels and how exactly a virtual private gateway helps in establishing a highly available site-to-site tunnel, at least from the AWS perspective. Now, before we conclude, I just want to share that, although you have high availability on the AWS side, over here on the right-hand side you still have one router, or it can be one server, and if this server or router fails, your tunnel will be broken once more. However, one of the primary things that you need to remember is how you can achieve high availability, at least from the AWS side, which is achieved with the help of the virtual private gateway.
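The active/passive behaviour of the two tunnels can be sketched as a simple health-based selection in Python. The endpoint IPs below are documentation addresses (from the 203.0.113.0/24 TEST-NET range), not real tunnel endpoints, and in reality the failover is handled by routing on the customer gateway, not by application code:

```python
# Two tunnels to the virtual private gateway, one per availability zone.
# Endpoint IPs are placeholder documentation addresses, not real endpoints.
TUNNELS = [
    {"endpoint": "203.0.113.10", "az": "us-east-1a", "state": "UP"},
    {"endpoint": "203.0.113.20", "az": "us-east-1b", "state": "UP"},
]

def active_tunnel(tunnels):
    """Return the first healthy tunnel, mimicking active/passive failover."""
    for tunnel in tunnels:
        if tunnel["state"] == "UP":
            return tunnel
    # Both tunnels down: the whole site-to-site VPN connection is broken.
    raise RuntimeError("VPN connection is down: no healthy tunnel")
```

If the first endpoint goes down, traffic simply shifts to the second tunnel, which is exactly the switch to endpoint IP two described above; only when both are down does the VPN connection as a whole fail.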
115. Overview of AWS Config
Hi everyone, and welcome back to the Knowledge Portal video series. So today we are going to talk about AWS Config. Before we get into what AWS Config is, let’s look at a scenario that will help us understand why we need it in the first place. One thing that is universal across most organisations is that infrastructure keeps changing. If you have an enterprise, it might be possible that every week there will be a new application coming up.
And for that new application, you might have to create new EC2 instances, new RDS databases, new SQS queues, et cetera. This is true for the majority of organizations. So let’s look at a very simple example where you have a new enterprise with a new AWS account. In week one, you have a couple of EC2 instances where your website is running, and suddenly you find that there are a lot of users, a lot of hits, coming to your website. So you increase the number of EC2 instances in week two. In addition, you add an Elastic Load Balancer.
So this is something that you did in week two. Now that the traffic was increasing, you added a few more things in week three: many more EC2 instances, an Elastic Load Balancer, an S3 bucket (possibly for content delivery), and a relational database, or RDS, within your Amazon account. So, if your infrastructure is changing a lot every week, your CFO or CEO will come in and say, “Show me how the infrastructure looked a week ago.” Now, you can’t just show them the CloudTrail logs; this is especially important for auditors, because it is difficult to see what exactly changed from week one to week two to week three just by looking at the logs and manually drawing diagrams of what changed. This is one of the reasons AWS created the Config service.
So AWS Config keeps track of your inventory as well as inventory changes. It will show you that on this date, this was the inventory, and on the next day, these were the changes that happened within your AWS account. So it becomes very easy for you to track the changes. So, let’s do one thing: let’s go to the AWS account and check out Config. Let me open AWS Config. Okay, so let’s click on “Get started.” It’s very simple to configure. By default, it records all the resource types supported in this region, but we also want to include the global resources, such as IAM. Next, it is asking me for a bucket name, so I’ll specify it.
So let me just give it a sample bucket name. Essentially, this bucket ensures that AWS Config keeps its configuration snapshots within the S3 bucket. So let’s say after one year you want to see the backdated data; you could actually open the logs from a few months ago from the S3 bucket. The next thing you have to configure is the SNS topic and the IAM role, and then I’ll click on Next. We’ll talk about Config rules in the upcoming lectures, but for the time being, let’s finish configuring AWS Config. So now it is setting things up. In general, AWS Config may take some time to configure, because once configured, it takes stock of all the inventory in your AWS account. In my case, I have almost nothing in this test account, so it loaded up pretty quickly. Now, one thing that is important to remember is that AWS Config does not support all the resources. It only supports specific resources, related to CloudTrail, EC2, Elastic Load Balancers, IAM, RDS, and a few more. So not all resources are supported in AWS Config. The second important thing to remember is that AWS Config is region-specific; it is tied to a specific region and is not global. In my case, my infrastructure is within the North Virginia region, so let me go to the North Virginia region so that we can actually see things in a much better way. So I’m in the North Virginia region. Let’s do one thing: let me select the security group resource type over here, and let me go ahead and click on “Look up.” Okay, so it’s showing me all of the security groups that are present in this specific region. You can also select, for example, EC2 instances, and then click on “Look up,” and it will show you all the data related to the EC2 instances and the security groups. So these are the EC2 instances, and these are the security groups that are available.
So let me open the EC2 console as well. Okay, let’s do one thing. I have one security group called “OpenVPN Access Server.” Let me click on that security group and take the security group ID. I’ll take the security group ID, unselect the instance resource type, and look up this particular security group ID. Now, there is a column called “Configuration timeline”; let’s click there. What this configuration timeline will do is show you any changes that were made to this particular security group. Since we enabled Config only a few minutes ago, it will not show any configuration changes yet. But if you come down here, there are two very important fields to remember. First is the relationships field. A relationship tells you which instances or resources this security group is associated with.
So if I click over here, it says that this security group is connected to this network interface, it is attached to this EC2 instance, and it is part of this particular VPC. That is a very important thing to remember. The second critical field is changes. Within this, it says “configuration changes”: if you modify some aspect of the security group, it will show which aspects were changed. So let me give you a practical example that will make things much clearer.
So let me add a few inbound rules over here (ports 91, 116, and 75), delete the rule for port 22, and then click Save. So we have changed some aspects of this security group, and this should be reflected in the AWS Config console. Let me do one more thing: let me attach this security group to another instance as well. I'll change that instance's security groups. Note that any changes you make will not be instantaneous; they will take a few minutes to be reflected in AWS Config. So let me add the security group to one more instance over here. Okay, so what we did was change the security group, which is the OpenVPN Access Server one, and also attach that security group to a different instance.
So these changes should be reflected in the AWS Config console. Again, it might take some time, I would say a few minutes, before they show up over here. So let's pause this video, and I'll be back in a few minutes. Okay, it has been around five minutes, so let me just refresh this page and see if the configuration has come up. Okay, so if you see over here, it is showing me that some changes have been made to this particular security group; you can see the difference in the timestamps. So let's look at what has changed. Let's go to the "Changes" section over here, and it shows me exactly what changes have been made to this particular security group.
So it shows me the specifics of what has been added and what has been removed. Along with that, if you remember, we had also attached this particular security group to a new EC2 instance. That change is classified as relationship data: it shows that this security group is now attached to one more network interface. Remember, a security group is attached to the network interface, and that is what is shown in the relationship status. So this is the EC2 network interface; we now have two, whereas previously there was only one. And within the changes section, you can see the security group rule changes as well as the network interface to which it was connected. Now, if you're wondering where it is getting this data from, it is actually getting it from our old friend, CloudTrail. If you remember, all the API-related activity, anything that you do within your AWS account, gets logged via CloudTrail.
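The "Changes" view is essentially a diff between two snapshots of the resource. Here is a minimal sketch of that idea, comparing two hypothetical snapshots of the security group's allowed inbound ports (the port numbers mirror the edits we made above):

```python
# Two hypothetical snapshots of a security group's allowed inbound ports:
# "before" is the old configuration item, "after" is the one captured
# after our edits in the EC2 console.
before = {22, 443}
after = {443, 91, 116, 75}

# Set differences give exactly what Config's "Changes" panel reports:
added = after - before      # the new rules we created
removed = before - after    # port 22, which we deleted

print("added ports:", sorted(added))
print("removed ports:", sorted(removed))
```

Config does this comparison across the whole configuration item (rules, tags, relationships, and so on), not just ports, but the principle is the same.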
So Config will pull that CloudTrail-related data and then interpret it into a more usable format for us to examine. It also shows you the time at which the events occurred. It is very important to remember that AWS Config matters a great deal for enterprises and even medium-scale organisations. So, coming back to the PowerPoint presentation, I hope you understood the basics of why AWS Config is required. Let's take a look at some of the use cases where having AWS Config enabled might be useful. As an example, suppose your infrastructure costs have suddenly increased and your chief financial officer (CFO) wants to know what has changed in the last three weeks.
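To see what Config is working from, here is a trimmed, hypothetical CloudTrail record for an AuthorizeSecurityGroupIngress call (the account ID, user, and group ID are made up), with the fields Config would correlate with the security group's change pulled out:

```python
import json

# A trimmed, hypothetical CloudTrail record for adding an inbound rule
# to a security group. Real records carry many more fields; these are
# the ones relevant to correlating the change with the resource.
event = json.loads("""
{
  "eventTime": "2021-03-02T10:14:55Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "AuthorizeSecurityGroupIngress",
  "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
  "requestParameters": {"groupId": "sg-0a1b2c3d"}
}
""")

# Summarise the API call: when, what, on which resource, and by whom.
summary = (f'{event["eventTime"]} {event["eventName"]} '
           f'on {event["requestParameters"]["groupId"]} '
           f'by {event["userIdentity"]["arn"]}')
print(summary)
```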
Instead of digging through raw logs and other information, you can simply open up AWS Config and show him what changes have been made, and he may be impressed as well. The second use case: let's say you are a DevOps engineer at XYZ organization, and last night everything was working fine, but suddenly in the morning users are reporting that they are not able to access the website. Perhaps there was a change related to an EC2 instance or a security group. In this case, you can use AWS Config to determine what exactly changed between last night and this morning. So these are a few use cases for AWS Config. Now, there are a lot more features that AWS Config provides that are really amazing, and we'll discuss some of them in the next video.