Amazon AWS Certified Security Specialty SCS-C01 Topic: Domain 3 – Infrastructure Security Part 4
December 19, 2022

16. Egress Rules – The Real Challenge

Hi everyone, and welcome back to the Knowledge Portal video series. In today’s lecture, we’ll understand more about outbound rules in a firewall. Hopefully, Postfix is in place. Okay, it is not running. Let me just restart the Postfix service. Okay, it has started, and let me try sending the email. Okay, as you can see, I received the email at my temporary address. So this is the email that we sent from our machine. Now, this is a very simple example. In practice, what happens is that a script sends thousands of such emails to random email addresses with some kind of phishing link, with the expectation that even if only a few people open that link, many kinds of attacks like phishing become possible. So how can this be prevented? This can be easily prevented if there is some kind of outbound rule here. So, instead of allowing all outbound traffic to 0.0.0.0/0, restrict the outbound rule so that only the traffic you actually need can leave the instance. Let me just refresh this, and now if I try to send the email again, let me just set the subject to “proper firewall rules”. This is a perfect firewall outbound rule.
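For reference, the equivalent change from the AWS CLI would look roughly like this; the security group ID is a hypothetical placeholder, and this is only a sketch of removing the default allow-all egress rule:

```bash
# A minimal sketch (hypothetical security group ID): remove the default
# allow-all egress rule so nothing, including SMTP on port 25, can leave
# the instance unless explicitly allowed.
aws ec2 revoke-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
```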

Okay. Now let’s see if we have received this email. This is the content of the two emails that we actually sent. Okay? So, if you see “proper firewall rules”, this is the subject name, and if you try again now, you will not get the email, and the reason for this is that the outbound rule is restricted. Now, the reason why we got two emails over here is because we actually sent them twice. So this is the first time, and this is the second time. Now, one of the problems with this kind of implementation is this: it is one of the most restrictive outbound rule sets you can have, but if you try to install packages, things break. Let me try to install telnet. If I press Y, you’ll see that it will actually not be able to download the package because outbound traffic is restricted. So in order to allow packages to be downloaded, or any software to be updated, you need to allow some kind of outbound rule.
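One way to do that from the CLI, again sketched with a placeholder group ID:

```bash
# Allow outbound HTTP (port 80) to anywhere so yum can reach the
# package repositories.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions '[{"IpProtocol":"tcp","FromPort":80,"ToPort":80,"IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
```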

Typically, in enterprise organizations, outbound port 80 to 0.0.0.0/0 is allowed. Okay? Now, if I go ahead and try to download telnet again, it will work, and the reason it will work is because port 80 is allowed. So you can have this kind of configuration specifically for outbound. Now, let me show you one interesting thing. Let me just verify if the NGINX package is installed. It is not. So let me just download NGINX over here, okay? And let’s start NGINX; I’ll say service nginx start. Let’s do a netstat to verify if it is running, and it is running. Let’s do one thing inbound: add a rule allowing access to port 80 from my IP address. Okay? Now let me try to open this particular website. And now you will see that I actually received the response from NGINX. So, in case you’re wondering what I’m up to, let me just remove the outbound rules, okay?
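Roughly the commands used in this demo, for reference (Amazon Linux is assumed):

```bash
# Once outbound port 80 is allowed, package installs work again.
sudo yum install -y telnet

# Install and start NGINX, then verify it is listening on port 80.
sudo yum install -y nginx
sudo service nginx start
sudo netstat -tlnp | grep nginx
```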

So I only have one outbound rule. And let me just refresh this, and you’ll see it will work. Let me try to open it in a new tab, and it will work. Now, the question here is, “How come it is working?” I mean, I’m sending a request to the NGINX server running on this particular instance, and I’m still getting the response, even though the outbound is actually restricted. So logically, I should not get any response because outbound is restricted. So how am I getting this particular response? And the answer to this is the stateful nature of firewalls. And this is one of the very important things to understand about firewalls. Even in interviews, one of the most common questions is about the stateful and stateless nature of firewalls. So what we’ll do is understand these two natures of firewalls in the next lecture, and I hope the basic concept of having an outbound firewall rule has been understood. So this is the basic idea behind the security group outbound rule. I hope this has been informative for you. And I’m sure that from now on you’ll configure the outbound rules for your websites. Thanks for watching.

17. IPTABLES & Instance Metadata

Hi everyone, and welcome back to the Knowledge Portal video series. Instance metadata will be the topic for today. We’ve already talked a lot about EC2 in previous lectures, and now that we’re familiar with it, we’ll go into much more detail about the instance metadata. So let’s explore that. Now, I have one instance running in the public subnet, and it has an IAM role associated with it. So if I click on this IAM role, let’s look at the policies that are attached to this role.

And it has two policies. One is AmazonS3ReadOnlyAccess, and the second is related to KMS. So now let me proceed to the instance. Since the role has the S3 read-only policy, I should be able to list all of the buckets for my particular account. So let’s try that. So I’ll run aws s3 ls, and all the buckets are listed here. Now, if someone asks you, maybe in an interview, how exactly this works even though you have not configured any access or secret keys, one thing to keep in mind is that everything in Amazon works like an API.
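The command from the demo; note that no keys are configured locally, the attached role supplies temporary credentials behind the scenes:

```bash
# List all S3 buckets in the account. This works without any configured
# access/secret keys because the instance's IAM role provides them.
aws s3 ls
```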

So you need an access key as well as a secret key to connect to any of the services. Because you are not configuring the access and secret key here, the attached IAM role is responsible for providing the access and secret key to this specific instance, as we can see from the instance metadata. So let’s go down here a little bit. And if you see over here, the documentation has already given one command to access the metadata related to an EC2 instance. I’ll just copy that and paste it over here. It’s basically curl http://169.254.169.254. So this IP address is something that you should always keep in mind when it comes to exams and interviews. So if I run this particular command, you can see that it gives me various information like AMI ID, IAM role, MAC address, et cetera. So if I query ami-id over here, it will return the AMI ID associated with this specific instance. It also has an instance type. Let me try the instance type over here.
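The metadata queries from the demo look roughly like this (IMDSv1 style, as used in the video):

```bash
# The instance metadata service always lives at this link-local address.
curl http://169.254.169.254/latest/meta-data/                # list categories
curl http://169.254.169.254/latest/meta-data/ami-id          # AMI ID
curl http://169.254.169.254/latest/meta-data/instance-type   # e.g. t2.micro
curl http://169.254.169.254/latest/meta-data/instance-id
```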

So if I query the instance type, it says t2.micro, and if I try the instance ID, it ends with “9a”. So now, if you just want to verify that this is really correct, let me go back to my EC2 instance. You can see the instance ID ends with “9a” and the instance type is t2.micro. So metadata gives you information about a specific instance. We’ve already established that this instance is associated with an IAM role. So what if you want to know what the IAM role is? Let’s say you don’t have access to the console; you only have access to the instance. And now you want to see the name of the IAM role that is associated with it. So what you can do is go to the IAM section of the metadata. So I’ll query iam/info, and this will give me the ARN for the IAM role. So if you see the IAM role name here, it is named “public”, and to verify again, if I just go back to the console, the name of the role is “public”. As a result, I can obtain a great deal of information from the EC2 instance metadata.
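Discovering the attached role from inside the instance, as in the demo:

```bash
# Returns the instance profile ARN, which contains the role name
# ("public" in this demo).
curl http://169.254.169.254/latest/meta-data/iam/info
```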

Now, we already discussed that the IAM role that is attached to the instance provides an access and secret key, and that access and secret key can be used by the AWS SDK or CLI that is configured within the instance. So if you want to see the access and secret key that the IAM role has provided to this particular instance at this time, you can query that with the instance metadata as well. So if I go to the security credentials path, okay, let me just verify the correct path. It’s iam/security-credentials, followed by the role name. So let me just clear the screen. Okay. So it’s iam/security-credentials/, followed by the role name. Now we know that the role name is “public”, so I’ll just append “public” over here. What this should essentially do is give me the access and secret key that this particular role has provided. And now, if you see over here, this is the AccessKeyId, this is the SecretAccessKey, this is the Token associated with them, and it also has an Expiration timer. This means that the role associated with the instance is responsible for providing the access and secret key behind the scenes. So whenever I run an aws s3 command, behind the scenes this access key, this secret key, and the token are being used to authenticate to S3 in Amazon.
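The credential queries look roughly like this:

```bash
# List the roles available to the instance, then fetch the temporary
# credentials for the role named "public" (the role from this demo).
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/public
# The JSON response contains AccessKeyId, SecretAccessKey, Token,
# and Expiration.
```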

So this is a very important thing to remember. Now, one question that comes up specifically from the security side is whether any user who is connected to this particular instance can obtain this access and secret key. So let’s create a sample user, say “test”. I guess there is some kind of lag that I’m facing today, so bear with me over here. So let me do a su - test. Okay. I should be logged in as the test user now. Great. So from this test user, if I try to query the same metadata, let me check if it really works. So I ran the same query on the metadata from the test user, and now you see that even the test user can get the access key as well as the secret key and the token. So it is very important to verify who has access to the instance, because any user who has access to an EC2 instance can actually get the access key, the secret key, and the token, and he can use them for malicious purposes as well. Now, one thing that is really good is that there is an expiration timer that AWS provides. But until these keys expire, any user can assume the role of this particular instance, even from his local environment.
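A sketch of that check (the useradd step is an assumption; the demo only shows switching to the test user):

```bash
# Create a throwaway user and switch to it.
sudo useradd test
sudo su - test

# Even this unprivileged user can read the role's temporary credentials.
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/public
```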

So the question is, from a security standpoint, how can you prevent all users except root from accessing this data? And there is a very simple solution for this: iptables. So let me just show you this before we go ahead and understand what it basically means. Let me clear the screen, okay? I’ll paste this particular iptables rule. What this basically says is that any time an output connection is made to this particular IP address by anyone except root, it should be dropped. If you’re not familiar with iptables, I’d call iptables a command-line firewall. So now what will happen is that if anyone except root tries to initiate a connection to this particular IP address, that connection will actually be blocked. So let’s verify that this rule has been applied. If you look into the OUTPUT section, you see that there is a DROP rule associated with this particular IP for anyone other than the root account. So if I switch to the test account that we created earlier and then run the same command, let’s see if it works now or not. Okay?
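The rule pasted in the demo is along these lines (run as root); the ACCEPT line is a hedged sketch of the application-user exception discussed below, with “appuser” as a hypothetical username:

```bash
# Drop any outbound packet to the metadata address unless it was
# generated by root.
iptables -A OUTPUT -m owner ! --uid-owner root -d 169.254.169.254 -j DROP

# Verify the rule is in place.
iptables -L OUTPUT -n -v

# If an application that uses the IAM role runs as "appuser", insert an
# ACCEPT rule for that UID ahead of the generic DROP.
iptables -I OUTPUT -m owner --uid-owner appuser -d 169.254.169.254 -j ACCEPT
```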

So now if I try to make the same curl request, you can see nothing is happening. And this is because iptables is actually blocking it. So no one except the root account can access the metadata. Now, one important thing to remember is that if there is an application running on this particular server that uses the IAM role, you need to allow the application user, meaning the user the application runs as, to access this particular metadata, as in the ACCEPT rule sketched above. But except for that application user, everyone else should be blocked with iptables. That is a very important thing to remember. The even more important principle is that you should not grant access if it is not required. So that is the first point, and this is the second point. So this is the basic information about the instance metadata and how we can use iptables to block access to the instance metadata IP address. I hope this has been useful for you, and I hope it gives you a much clearer idea of how the IAM role works. I’d like to thank you for viewing.

18. IDS / IPS in AWS

Hey everyone, and welcome back. In today’s video, we’ll go over IDS/IPS solutions and how they can be used in cloud environments. So, let’s understand the difference between a firewall and an IDS/IPS system with a simple analogy. Now, let’s say that you have security personnel here, and this is the gate.

So let’s say this is the firewall. Now, the firewall will basically prevent people who might be going through a different entry, let’s say through some backside doors, et cetera, from going inside. But if people are coming through this official door, entry would be given. Firewalls are very similar. Assume port 80 is open in a firewall; traffic on it will pass through. However, if port 22 is closed, then the firewall will block that traffic from coming in. The problem is that the firewall does not inspect the data that is going through the gate. Now, typically, if you go into airports, you have this type of scanner where your luggage is scanned, so whatever you have inside the luggage gets completely scanned. It is possible that you are carrying weapons or other unauthorized items that are not permitted inside. Detecting this is the responsibility of the IDS and IPS. So let’s say you are carrying some kind of exploit inside your packets, or you are carrying something like a SQL injection.

Those kinds of malicious packets are something that an IDS and IPS can block. If you understand a TCP/IP-based packet firewall, you will notice that it typically looks at the source address, destination address, source port, and destination port. These are the four major components based on which the firewall typically operates. However, you also have the TCP data. So an attacker can send a malicious, exploitable packet to the server, due to which it can be compromised. IDS and IPS typically look into this TCP data. Now let me quickly show you this in Wireshark. So this is my Wireshark. Let’s click on one of the packets. Now, if you look at the IP packet over here, the IP packet basically contains the source and the destination address. And you have the TCP segment, which basically contains the source port and the destination port. So this is where the firewall typically operates, but you also have the data.

Now, the data is very important here, because if you do not scan the data, or do not block it when it is malicious, the chances of your server being exploited are much higher. So it is very important that you also look into the data part. Now, in this type of packet, this is encrypted data. And typically, if you are deploying an IDS/IPS solution, you should also deploy your private key so that the IDS/IPS can decrypt your encrypted data. Now, the deployment of an IDS/IPS-based architecture in cloud environments is a little different when you speak about AWS. You do not really have full access to the network, so you cannot create a SPAN port or something similar. So the IDS/IPS architecture can be deployed in this way: you have your IDS/IPS appliance, say in a public subnet, and all the EC2 instances have the IDS/IPS agent installed. The agent would communicate with the appliance, and the rules would be applied accordingly. All right? So remember this architecture, because when you go ahead and design an IDS/IPS solution, you will have to approach it in this specific way. There are other ways, but they are not really recommended, as they can bring your servers down. So this is the most ideal way.

When you search the AWS Marketplace for IDS/IPS, you will find a variety of solutions. My preference is Trend Micro Deep Security. It is quite good. I have been using it for the past three or four years in an environment that has multiple compliance requirements, more than five or six, ranging from PCI DSS to ISO, and different country-level compliances as well. So Trend Micro’s Deep Security is a pretty interesting one, and it really works well. Anyway, this is something that I would suggest as a recommendation. A few things to remember before we conclude this video: IDS/IPS stands for intrusion detection system and intrusion prevention system. The detection system can only detect; it cannot block packets. The intrusion prevention system, if it sees that there is malicious data in a packet, will go ahead and block that packet. Now, in the AWS environment, you have to install the IDS/IPS agent in your EC2 instances, and the agent will communicate with the central IDS/IPS appliance for the rule configuration.

19. EBS Architecture & Secure Data Wiping

Hi everyone, and welcome back to the Knowledge Portal video series. Today we are going to talk about the EBS architecture. Now, understanding the EBS architecture is very important because it will help you understand how the virtualized infrastructure really works. And once you know about the EBS architecture, you can also design the security controls within your organisation accordingly. So let’s understand the basics of the EBS architecture. In traditional servers, all of the components were contained within the same hardware. So if this is one server, the CPU, the RAM, the hard disc drive, as well as the motherboard were all integrated within the hardware itself. So the question is, what if the motherboard goes down? Or what happens if there is some kind of short circuit within the motherboard and it stops working? Traditionally, the entire stack goes down, including your hard disc drive. During that time, you would have to manually pull out the hard disc drive and attach it to different hardware, which is really problematic.

Just take the very simple example of your laptop. What happens if the motherboard goes down in your laptop? The entire laptop stops working. So now, if you want the data that is on your hard drive, you need to actually open the laptop, remove the hard drive, and plug it in somewhere else, which takes a lot of time. And the same goes for traditional servers. As a result, the network-attached storage environment was created. In network-attached storage, the CPU, the RAM, and everything else were still within a single piece of hardware; however, the hard disc drive was centralised. So the data of all of these computers is stored on the central network-attached storage server. What happened was that the NAS used to mount the data onto the individual laptop or computer using various protocols such as iSCSI. And once the computer has done its work, we unmount the hard drive, and it goes back here. So this is a much better approach than the earlier one, because let’s assume that this particular computer goes down.

Now we don’t really have to worry, because the entire data is stored on the central server. So even if this computer goes down, we can actually mount the data from the central server to the second computer over here, and we will be up and running in very little time. And this is one of the reasons why network-attached storage is one of the de facto standards in most virtualization infrastructures, which includes AWS. So yes, AWS does work on network-attached storage. In terms of EBS architecture, EBS is entirely based on network-attached storage. By that, I mean the EC2 instance resides in one environment, and the EBS volumes are in a different environment. And both of these environments are linked by a network. So there is a network between an EC2 instance and the EBS volume. So you see, there is a network over here; there are LAN cables over here. So now, what happens if you want to put some data on an EBS volume?

The data actually travels over a network using a network-attached storage protocol like iSCSI. And then it gets stored in whatever EBS volume is assigned to you. Now, there is one problem over here, and that problem is that the data is travelling in plain text over the network. So what happens if there is very sensitive information, such as a customer’s credit card information, and you really don’t want the information to travel over the network in plain text? You’ll need to encrypt any data that travels across the network as well as any data that is stored within your EBS volume. And this is one of the reasons why, if you really want the data to be encrypted both in transit over the network and at rest, AWS recommends that you go ahead with EBS encryption. So, talking about EBS encryption, let me show you. Whenever you create a volume, you will notice that there is an encryption option.
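The same can be sketched from the CLI; the availability zone and size here are placeholder values:

```bash
# Create an encrypted EBS volume; its data is then encrypted at rest and
# in transit between the instance and the storage backend.
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 8 \
    --volume-type gp2 \
    --encrypted
```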

So, if I select that encryption option in the console, any data that travels across the network and is stored here will be encrypted. And this is something you should do if you have very sensitive data. So this is one key aspect. Now, the second important point to remember is what happens when you delete a volume or when you terminate an instance. Again, if you terminate an instance, the underlying SSD is still there, right? All that happens is that the connectivity between your EC2 instance and the SSD goes away. And since this SSD is no longer being used by you, it can be assigned to a different customer. So what happens? Let’s assume that you used to store very sensitive data on this particular SSD, but now you don’t really need it. Amazon will not scrap the SSD after you are done using it; they will assign it to a different customer. So there are two ways in which you can make sure that the data you stored on the SSD will not be retrieved by other people.

So there are various ways to nullify the data over here. The first way is that before you terminate the instance or delete the volume, you have the option to manually overwrite the data in EBS. So this is one option. The second option: assume that you don’t overwrite the data, and all the data is still there when you delete the volume. In this case, AWS will wipe the data immediately before that particular EBS storage is made available for reuse. So, if you are using this EBS volume and delete it, then before it is assigned to a different customer, AWS will wipe all of the data that was present on it. Now again, there are two options over here. If you’re wondering how a customer can wipe the data himself, it’s one of the simplest things you can do. Let me just show you on a machine here. Generally, what happens specifically in my organisation is that if a system administrator is leaving the organisation, he has to hand over the laptop, but he does not hand over the laptop directly. He wipes the laptop with a large amount of random data before handing it over, ensuring that his files cannot be recovered. Remember that even if you delete a file, it can still be recovered with proper forensics.

So you need to make sure that the file cannot be retrieved at any cost. As far as Linux is concerned, let me give you a simple example. Let me create a sample file with touch test.txt. So you see test.txt there; let me open that, and let’s put some credit card information in it, say 4800-something; just assume that this is credit card information. Now it is time for you to return your laptop. One option is that you do an rm -f test.txt, and the data will be gone. But do remember, if forensic recovery is done on this particular hard disk, it is possible for a person to retrieve this particular data, which is actually dangerous. So, instead, you can do a dd with if=/dev/zero and the output file being our test.txt, with a block size of one MB; let me do a count of, say, ten. Okay, so that data was copied over this specific test file, and now if I do a less on test.txt, you’ll see that it has been completely overwritten. This is what “the customer can wipe the data in the EBS volume” means.
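Roughly the demo commands (the credit card number here is obviously fake):

```bash
# Create a file containing pretend-sensitive data.
touch test.txt
echo "4800-1234-5678-9012" > test.txt

# Overwrite it with zeros before deletion so the original contents
# cannot be recovered; use if=/dev/urandom for random data instead.
dd if=/dev/zero of=test.txt bs=1M count=10

less test.txt   # shows the overwritten contents
```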

Here, we just wiped the data of a particular test file, but you can use the same procedure to wipe an entire disc with random characters. So this is the first point. The second point is that AWS will also wipe the data immediately before the EBS storage is made available for reuse. Now, the reason I marked it in red is that this is an extremely important point to remember, both in terms of audits and the exam. So do remember this particular point. And the third important point is that when a storage device, such as a hard disc drive, reaches the end of its useful life, it is no longer considered reusable. Such devices are decommissioned via the detailed steps mentioned in NIST 800-88 and DoD 5220.22-M. So, when you want to throw away a hard disc drive, you have to make sure that the data on it cannot be retrieved at any cost. There are standards like NIST 800-88 or DoD 5220.22-M which tell you exactly how to nullify the data. After that, the hard discs are discarded. So, you see, there are photos of hard discs that were physically shredded into small pieces, so the data cannot really be retrieved.

So, if you’re wondering what the NIST standard is, it is the guideline for media sanitization, which contains the detailed steps on how you can sanitise the media that you want to scrap. AWS follows this particular standard to decommission any hard disc drive. So these are the three important points to remember. One more important thing is that you should study the AWS “Overview of Security Processes” white paper, because it has a lot of details related to various things, including the decommissioning of hard disc drives. So, if you look at the storage device decommissioning section, it actually tells you how AWS decommissions EBS volumes. This white paper has a lot of details, and it will definitely help you in the exam. So, again, this is it for this particular lecture. Now, an important point for the exam is that you should know when the data gets scrubbed. So, when an EBS volume is deleted, when is the data scrubbed? This might be a potential question, and there can be four options: it is wiped immediately after the EBS volume is deleted; it is wiped after it has been assigned for reuse a second time; it is wiped immediately before reuse; or it is not wiped at all and it is the customer’s responsibility. I will leave the answer up to you. I hope you understood the basics of the EBS architecture and EBS wiping. And I would like to thank you for viewing.

20. Introduction to Reverse Proxies

Hey everyone, and welcome to the Knowledge Portal video series. Today we’ll be speaking about one of the most amazing features of NGINX, which is the reverse proxy. So let’s go ahead and understand what a reverse proxy is all about. In simple terms, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. So let’s understand how this would really work. You have a typical internal network where you have an NGINX server sitting over here, and in the back end, you have an application server. Now, whenever a client wants to access this application server, it first sends the request to the NGINX server. The NGINX server internally forwards the request to the application server. Similarly, the application server sends the response to the NGINX server, and the NGINX server forwards the response back to the client. Now, this type of architecture is very useful because there is an NGINX server that sits between the client and the application server.

So your application server is not directly exposed to the client. This NGINX setup is called a “reverse proxy” because it is actually taking the request from the client, sending it to the back-end server, taking the response, and sending it back to the client. Now, since it is sitting in between, it can actually do a lot of things. For example, it can provide some kind of DDoS protection. So if a client is trying to attack with a lot of packets, that can be blocked at the NGINX level itself. It can also do other things like web application firewalling, caching, and compression. So there are many possibilities when you have NGINX sitting in between, and this is why it is known as a reverse proxy. Now, a reverse proxy can play a key role in achieving a lot of use cases. One of the very important use cases is that it hides the existence of the original back-end servers. So a specific client does not really know how many servers are actually behind the scenes or what their configuration is. The client only needs to be aware of this NGINX server, and if we apply all the hardening over here, this would actually protect our entire website.
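To make this concrete, here is a minimal sketch of such a proxy configuration, written as shell commands; the config path and the back-end IP (172.17.0.3, matching the demo later in this lecture) are assumptions:

```bash
# Write a minimal reverse-proxy server block and reload NGINX.
sudo tee /etc/nginx/conf.d/reverse-proxy.conf > /dev/null <<'EOF'
server {
    listen 80;

    location / {
        proxy_pass http://172.17.0.3;             # back-end application server
        proxy_set_header Host $host;              # preserve the original host
        proxy_set_header X-Real-IP $remote_addr;  # pass the client IP along
    }
}
EOF
sudo service nginx restart
```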

The second important point is that it can protect the back-end servers from web application-based attacks like SQL injection: if the NGINX reverse proxy includes a web application firewall, it can also protect against denial-of-service attacks and other threats. A third important point is that a reverse proxy can provide great caching functionality. What do I mean by that? Let’s assume a client requests index.html from the application server, and the server gives it back to the client. Now, after a few seconds, one more client requests the same file. Instead of NGINX asking the application server again and again, what it will do is store the file within the NGINX server itself. And whenever a client asks for that file, the NGINX server will directly send it to the client without contacting the application server. So this provides much faster speed and response time. That is what is meant by the great caching functionality. NGINX can also optimise the content by compressing it, for example with gzip; we’ll be looking into that. It can act as an SSL termination proxy, and you can also do things like request routing and many other things. So, everything we’ve been talking about here will be put into practise in the following lectures. So let’s do one thing, since this has become theoretical. We’ll have an NGINX server as well as a back-end server, and we’ll do a practical to see how it would really work. So let me show you. This is our NGINX reverse proxy, and behind it is the back-end server.

There is one more server, which is the back-end server. You see, the IP addresses are different: this one is 172.17.0.3, and you have 172.17.0.2. Within the back-end server, I have an index.html file that says “this is a back-end server”. We’ll send requests to our NGINX reverse proxy and examine the response that we actually get. Perfect. Before that, I’ll open the NGINX access log on both of them with tail -f, to make things much clearer for us. Perfect. Currently the access logs of both of these instances are empty, which means no new requests are coming in. So when I open example.com this time, let’s see what happens. It says “this is a back-end server”. So what really happened was that I requested example.com; the domain is linked to the NGINX server. The request went to the NGINX reverse proxy, which forwarded the request to the back-end server; it got the response from the back-end server, and it served the response back to the client. Now, if you just want to verify that, let’s see. So this is the reverse proxy. The reverse proxy received the request from 172.17.0.1. And if you go here, this is the back-end server, and it received the forwarded request from 172.17.0.2, the proxy itself.
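The demo steps, roughly, as commands (example.com and the default log path are assumptions):

```bash
# On the reverse proxy and on the back-end server, watch the access logs.
tail -f /var/log/nginx/access.log

# From the client, request the site; the proxy forwards it to the backend.
curl http://example.com/index.html
```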

So this is .1, this is .2, and this is .3. The reverse proxy receives requests from .1, while the back-end server receives requests from .2. And this is what is actually happening: the reverse proxy is receiving requests from 172.17.0.1, which is my browser, and the back-end server is receiving requests from 172.17.0.2, which is my NGINX reverse proxy. So those are the fundamentals of a reverse proxy. Now, since NGINX is sitting between the client and the back-end server, we can add a lot of things like a web application firewall or all the hardening-related parameters within the reverse proxy. So this is it. In this lecture, I hope you learned the basics of what a reverse proxy is. And this is what our intention was for this lecture.
