28. S3 Event Notification
Hey everyone, and welcome back. In today’s video, we will be discussing S3 event notifications. Now, S3 event notification is a pretty cool feature that allows us to get notified when a specific event or activity occurs within your bucket. So let’s say you have an S3 bucket, and someone uploads a file, someone deletes a file, or some other kind of activity happens.
And if you would like to receive a notification, maybe via SMS or email, you can do that. So this is the only thing that I have included in this slide, so that we can directly jump into the demo and explore this nice feature. So this is my S3 console. Before we start, let me just create a bucket. I’ll name it “kplabs-s3-event-notification,” and I can go ahead and create the bucket. Once it is created, you can just quickly search for it. So this is our bucket. Now, if you go inside the properties and scroll down a bit, there is a tab for events, and within this there is an option for adding a notification. So if we click over here, there are options for various kinds of events, like “PUT,” “POST,” “COPY,” “Multipart upload completed,” “All object create events,” “Object in RRS lost,” et cetera. So you can create an event in such a way that if any of these things happen, you receive a notification. Now, in order to deliver that notification, S3 event notification basically uses SNS.
So let’s do one thing. Let’s quickly go to SNS. I’ll move on to topics. I already have two topics created for our other demos, but for the sake of a complete, independent video, I’ll go ahead and create a new topic here. Let me name it “kplabs-event-notification,” and I can go ahead and create the topic. Once you have done that, you can click here and create a subscription, the type of which depends on where you want to receive alerts. I’ll just choose email here and put in my email address. Once you create a subscription, you should receive an email for confirmation. Great. So in my case, I have received the confirmation email. You can verify that this is the ARN of our SNS topic, and you can go ahead and click on “Confirm Subscription.” Once the subscription has been confirmed, its status should change from “PendingConfirmation” to the subscription ARN. Great. To be able to push messages into your SNS topic from the S3 bucket, you must first add a topic policy. So let’s go back to the topics. Here, I’ll select our SNS topic.
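The console steps above can also be sketched programmatically. This is a minimal sketch, assuming placeholder names and an illustrative email address; with boto3 and credentials configured, the commented calls would perform the same setup.

```python
# Sketch of the SNS setup done in the console: create a topic and an
# email subscription. The topic name, ARN, and address are placeholders.
create_topic_params = {"Name": "kplabs-event-notification"}

subscribe_params = {
    # TopicArn is returned by create_topic; shown here as a placeholder.
    "TopicArn": "arn:aws:sns:ap-southeast-1:123456789012:kplabs-event-notification",
    "Protocol": "email",
    "Endpoint": "admin@example.com",  # the confirmation email goes here
}

# With boto3 installed and credentials configured:
#   sns = boto3.client("sns")
#   topic_arn = sns.create_topic(**create_topic_params)["TopicArn"]
#   sns.subscribe(**subscribe_params)
```

The email subscription stays in “PendingConfirmation” until the link in the confirmation email is clicked.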
Go to Actions, and you can click on Edit Topic Policy. Now, within the advanced view, this is the place where you need to specify the policy. Before we do that, let’s do one thing: I’ll just copy this ARN, because it will be required. And if you look at the topic policy, this is the sample topic policy that I have here. You need to put your SNS ARN here. I’ll be posting this after the video, so you can try it out at your end as well. So I’ll copy the ARN over here, and the next thing you need to do is change the bucket name to the bucket that is going to send the notification. In my case, that’s kplabs-s3-event-notification. You can just copy it from the URL bar. Once you’ve done that, you can copy this policy, paste it into the advanced view, and click Update Policy. Perfect. So the topic policy has now been created. So let’s go back to S3.
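The topic policy described above can be sketched as follows. The ARN and bucket name are placeholder values, not the ones from the demo account; the policy’s shape is the standard one that lets S3 publish to an SNS topic.

```python
import json

# Placeholder values -- substitute your own topic ARN and bucket name.
TOPIC_ARN = "arn:aws:sns:ap-southeast-1:123456789012:kplabs-event-notification"
BUCKET_NAME = "kplabs-s3-event-notification"

# Topic policy that allows Amazon S3 to publish messages to the SNS topic.
topic_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3ToPublish",
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "SNS:Publish",
            "Resource": TOPIC_ARN,
            # Restrict publishing to events coming from our specific bucket.
            "Condition": {
                "ArnLike": {"aws:SourceArn": f"arn:aws:s3:::{BUCKET_NAME}"}
            },
        }
    ],
}

policy_json = json.dumps(topic_policy, indent=2)
print(policy_json)
```

The `Condition` block is what ties the publish permission to one bucket; without it, any S3 bucket in any account could post to the topic.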
And we’ll create a notification for new PUT events, which we’ll name “s3-notification.” We’ll leave the prefix and the suffix empty. For the destination, there are three options: SNS topic, SQS queue, and Lambda function. You can select SNS topic over here and choose the topic that we created, which is kplabs-event-notification. Once you have done that, click on “Save.” Now, if your topic policy is not correct, you will receive an error, and it will not allow you to save. In our case it saved, because our topic policy was correct. Great. So it’s time to test it out. Let’s do one thing: let’s upload a file to this S3 bucket. So I have selected a file called “nested.txt,” which was used for our CloudFormation video. Let’s go ahead and click on “Upload.” Now, once I’ve done that, I should have received an email. So this is the email. Now let’s copy the JSON into a JSON formatter so that it becomes easier for us to read.
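The notification configuration created in the console can be expressed as a plain configuration document. This is a minimal sketch with a placeholder topic ARN; the commented boto3 call shows how it would be applied.

```python
# Notification configuration equivalent to the console steps above:
# fire on s3:ObjectCreated:Put and publish to the SNS topic.
TOPIC_ARN = "arn:aws:sns:ap-southeast-1:123456789012:kplabs-event-notification"

notification_config = {
    "TopicConfigurations": [
        {
            "TopicArn": TOPIC_ARN,
            "Events": ["s3:ObjectCreated:Put"],
            # Prefix/suffix filters are omitted, matching the demo.
        }
    ]
}

# With boto3 installed and credentials configured, this would be applied with:
#   boto3.client("s3").put_bucket_notification_configuration(
#       Bucket="kplabs-s3-event-notification",
#       NotificationConfiguration=notification_config,
#   )
```

Just like the console, the API call fails if the topic policy does not allow S3 to publish, since S3 validates the destination before saving the configuration.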
So here you can see that the AWS region is ap-southeast-1, and the event source is aws:s3. The event name is “ObjectCreated:Put,” since we had created a new object, and this is showing the source IP address, and this is the S3 notification. It also displays the object key, which is the nested.txt file that was uploaded. So this is a pretty nice feature that AWS has released. Now again, there can be various use cases for this. I’ll show you one of the use cases that we used to implement in one of the organisations that I was working with. What used to happen there is that we had a front-facing web application through which customers uploaded their files into an S3 bucket. Those files then used to be pulled by EC2 instances, and certain work was performed on each specific file. So what we did was create a solution in which, as soon as a user uploads a file to S3, our Lambda function pulls that file and scans it for any viruses or malware using a certain set of YARA rules.
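The fields we just read from the email can be picked out programmatically. The sample below is an illustrative, trimmed version of the JSON that S3 publishes; the values are modeled on the demo, not copied from a real account.

```python
import json

# A trimmed sample of the JSON that S3 publishes to SNS (values are
# illustrative, modeled on the fields seen in the demo email).
sample_event = json.loads("""
{
  "Records": [
    {
      "awsRegion": "ap-southeast-1",
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "requestParameters": {"sourceIPAddress": "203.0.113.10"},
      "s3": {
        "bucket": {"name": "kplabs-s3-event-notification"},
        "object": {"key": "nested.txt"}
      }
    }
  ]
}
""")

record = sample_event["Records"][0]
print(record["awsRegion"])            # -> ap-southeast-1
print(record["eventName"])            # -> ObjectCreated:Put
print(record["s3"]["object"]["key"])  # -> nested.txt
```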
If the file was infected, it would notify the administrator; otherwise, it would do nothing. That was pretty much possible with the help of S3 notifications, because you never know when a customer will upload a file; it might happen at 2:00 a.m. So whenever a customer uploaded a file, an S3 notification was generated that invoked the Lambda function. The Lambda function pulled the file and scanned it, and if everything was okay, there was no notification; if malware or a virus was detected, it would send an email to the administrators. So that is one of the pretty good use cases that S3 event notification enables.
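The workflow just described can be sketched as a Lambda handler. This is a hypothetical outline, not the original organisation’s code: `scan_file` and `notify_admin` are stand-ins for the real YARA scan and the admin alert, and only the event handling mirrors the description above.

```python
# Hypothetical Lambda handler sketching the scan-on-upload workflow.

def scan_file(bucket, key):
    # Placeholder: the real implementation would download the object
    # from S3 and run it through a set of YARA rules.
    return "clean"

def notify_admin(bucket, key):
    # Placeholder: the real implementation would publish to SNS or
    # email the administrators.
    print(f"ALERT: {key} in {bucket} looks infected")

def handler(event, context=None):
    """Invoked by the S3 event notification on each upload."""
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        verdict = scan_file(bucket, key)
        if verdict != "clean":
            # Only infected files generate an alert; clean files are silent.
            notify_admin(bucket, key)
        results.append((key, verdict))
    return results
```

The point of the design is that no polling is needed: S3 invokes the function whenever an object arrives, whatever the hour.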
29. VPC Flow Logs
Hi everyone, and welcome back to the Knowledge Portal video series. So today we are going to talk about a very important topic called “flow logs.” Basically, flow logs allow us to see what type of traffic is coming to a particular network interface.
So let me give you a very simple example. On this slide, which we have already looked at in the earlier video lectures, there is a security group, and it is allowing port 22 to be accessed from this particular IP. So this is a genuine user: when he tries to SSH to this server, he’ll be allowed. However, there can be a lot of hackers as well who will also try to use SSH. As the security group does not allow their specific IPs, it will block or deny their access. Now, as security engineers, we should know what kind of packets are getting blocked at the security group level, so that we can have a better understanding of where the malicious traffic is coming from.
And one of the amazing features that AWS provides is that it allows us to see exactly what is coming over here, what is getting accepted, and what is getting blocked. This we already looked at, and I just wanted to add the slide showing that the security group is always associated at the network interface level. So, basically, the flow log works at the network interface level and allows us to see what type of traffic is coming in and whether it is accepted or rejected by the security group. So let’s go to our favourite AWS console, and here we have an EC2 instance running in a public subnet with a public IP. If we just click on this particular interface, you will see that there is a flow logs section over here. What this allows us to do is monitor the traffic on this particular interface. Now, one important thing to remember is that, since there can be a lot of interfaces like the ones you see over here, AWS also allows us to enable flow logs at the VPC level. So, if you have 100 servers, you can enable a flow log on each interface, or you can go directly to your VPC and enable the flow log there. In my case, I already have a flow log enabled, which I used for testing, so let me just delete it. What I’ll do is create a new flow log, and the first thing that is required is to set up the permissions. So I’ll click here, and basically you need to create a new IAM role. Amazon has already filled out the policy document. What this basically does is allow the VPC flow logs service to create a log group and put events inside that log group.
So if I just click on “Allow” over here... okay, let me go back to my VPC and try again. So I’ll create a flow log, select the role, and give a destination log group name; let me type “kplabs-flow-logs.” Okay, I’ll create the flow log, and now you see the status is “Active.” It is also showing the CloudWatch log group as “kplabs-flow-logs.” So let’s do one thing: let’s go back to the EC2 instance, because we have to generate some traffic to log. I’ll go to the EC2 instance, copy the public IP, and let’s just verify the security group. The security group only allows traffic on port 80. So let’s try to generate some traffic that we know will be blocked, for example ICMP traffic. If I simply ping the IP, you will notice that it will not get a reply, because the security group does not permit ICMP. Let’s generate one more packet, say telnet on port 22. We know that port 22 is not allowed.
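The flow log creation above can be sketched as an API call as well. This is a minimal sketch with placeholder VPC ID, role ARN, and the log group name from the demo; the commented line shows the boto3 call that would apply it.

```python
# Parameters for creating a VPC-level flow log, mirroring the console
# steps above. The VPC ID and role ARN are placeholders.
flow_log_params = {
    "ResourceType": "VPC",
    "ResourceIds": ["vpc-0123456789abcdef0"],
    "TrafficType": "ALL",  # capture both accepted and rejected traffic
    "LogDestinationType": "cloud-watch-logs",
    "LogGroupName": "kplabs-flow-logs",
    "DeliverLogsPermissionArn": "arn:aws:iam::123456789012:role/flow-logs-role",
}

# With boto3 installed and credentials configured:
#   boto3.client("ec2").create_flow_logs(**flow_log_params)
```

Setting `ResourceType` to `"Subnet"` or `"NetworkInterface"` instead would enable the log at those narrower scopes, which is the choice discussed later in this lesson.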
So it will not work. Let’s do one thing: let’s try telnet on 3306. This will be denied once more. So, these are all requests that will be rejected at the security group level. Now, all of these entries will be present in the new log group of the flow log that we just created. To verify, open up CloudWatch and go to the logs. Generally, the first time you create this log group in CloudWatch, it takes around four to five minutes for the data to populate. So, if you look over here, you’ll notice that this log group has already been created, but the data is not yet populated. Let me just try to open this and see if it works now. Okay, there is some error; ideally it comes up in a minute or two, so let’s wait a few more seconds. One thing that is really good here is that once you start to capture the flow logs, you will see an amazing chemistry between your servers and the hackers. You’ll actually get insights into what’s happening and what hackers are attempting to do, which is very interesting. Let me try to refresh this page. Okay, so it might take some time. Let me pause this video for a while, and in a minute or two, check if it is up and running. Okay, so it has taken around five minutes for the log data to appear, but it is here now. Let me open this particular log group. This is the stream for the public instance’s interface, and you see that these are all the flow logs.
Now let’s just tune it to the last five minutes; it has already been five minutes since we paused this video. One very interesting thing that you will see is that there are a lot of unknown IPs trying to connect to my public instance. Very interesting. So let’s do one thing: let’s take this IP address and do an IP trace. As I previously stated, there is an intriguing chemistry between hackers and your EC2 instances, which you can see in the VPC flow logs, and you can see this one is from Hong Kong or, I believe, China. So, in general, a large number of packets are rejected, many of which come from various countries, with China at the top of the list. Ideally, many enterprises just block a lot of Chinese subnets because there is a tremendous amount of malicious traffic coming from China. So, returning to our main topic, let’s choose one entry over here. Notice this one, because this is my IP, and if you remember, what we tried to do was telnet to port 3306, which is basically the MySQL port. Here you see that the VPC flow log is saying that someone from this particular IP tried to connect to port 3306 and was rejected; that means the security group blocked this particular packet. Now, it is very important for you to understand what exactly each and every field within the log means, even for the exam. So let’s go back to the presentation and understand each and every field from the log file. What I have done is copy a sample VPC flow log, so let’s understand each field over here. The first field is the version, which is the VPC flow log version, which is 2.
The second field is the account ID, which is here. The third field is the interface ID. The fourth field that you see over here is the source IP address; this is the IP address the packet is coming from. The next field is the destination IP address, which will always be the private IP address of the EC2 instance; just remember that. Next is the source port, and this is the destination port, followed by 6, which is the protocol number: TCP is denoted by the number 6. The number of packets transferred comes next, immediately followed by the number of bytes transferred. And the next two fields over here are the start time and the end time, in Unix seconds.
And the second-to-last field is basically the action, which can be either ACCEPT or REJECT; in our case, it is REJECT. The last field is OK, which is basically the log status; it means that this particular entry was stored in the VPC flow log successfully. So, two important things to remember for the exam: first, you must be very familiar with what each and every field here means. Second, keep in mind that flow logs can be enabled at the interface level, the subnet level, and the VPC level. Let me show you the interface level. If I go back to the EC2 instance, there is an eth0 interface; let me click here. Since we have already enabled it at the VPC level, the VPC automatically enables it on all the interfaces within it. So this is one interface, and as you can see, the flow log is already active. So, in general, you can enable the flow log at the interface level, the subnet level, or the VPC level. These are the two things that are very important in real life as well as from an exam point of view.
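The fourteen fields described above can be pulled apart with a small parser. The sample line below is illustrative (placeholder account, interface, and IPs), but its layout matches the default flow log record format.

```python
# Parse a space-separated VPC flow log record into the fields described
# above. The sample line is illustrative, not taken from a real account.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line):
    """Map one default-format flow log line onto named fields."""
    return dict(zip(FIELDS, line.split()))

sample = ("2 123456789012 eni-0a1b2c3d4e5f6a7b8 198.51.100.7 10.0.1.25 "
          "49761 3306 6 1 40 1565000000 1565000060 REJECT OK")

record = parse_flow_log(sample)
print(record["dstport"], record["action"])  # -> 3306 REJECT
```

This mirrors the telnet test from the demo: a connection attempt to the MySQL port 3306 that the security group rejected.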