Amazon AWS DevOps Engineer Professional – Monitoring and Logging (Domain 3) Part 1
August 27, 2023

1. CloudTrail – Overview

Okay, so we are getting into CloudTrail, and with CloudTrail, we'll be able to track all the API calls made within our accounts. So let's go to the CloudTrail console and create our first trail. CloudTrail will record, by default, all the actions in our accounts. Okay? But if you want these events to be delivered to an S3 bucket or to CloudWatch Logs, then you need to go ahead and create a trail. So let's just do that. I will name this trail DemoTrail, and I will apply it to all regions. I could apply it to only this region if I wanted to; right now I'm in Ireland, so the trail would only apply to Ireland if I select no. But I'll select yes and have the trail cover all the regions here.
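For reference, here is a minimal CLI sketch of the same setup (the bucket name is a placeholder, and the bucket must already have a policy that allows CloudTrail to write to it, which we'll look at below):

```
# create a multi-region trail delivering to an existing S3 bucket (name is hypothetical)
aws cloudtrail create-trail \
    --name DemoTrail \
    --s3-bucket-name aws-devops-stephane-course-cloudtrail \
    --is-multi-region-trail

# a trail records nothing until logging is started
aws cloudtrail start-logging --name DemoTrail
```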

For the events that I want to capture, I can have read-only events, which will be all the Describe, List, Get calls and so on, or write-only events, which will be all the Put and Create calls and so on, plus the Deletes. For now, I will apply to all, so I'll get all the read and the write events. Then, for data events, we could get information about what goes on in our S3 buckets at the object-level API activity, for example the API calls GetObject or PutObject, and that would be the setting here. We'll do this later on; for now, we'll just not check it. And you could have the same for Lambda: you can look at the Invoke API operations for individual functions by tracking Lambda functions from here.
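The same event selection can be expressed through the CLI; a sketch, where the bucket ARN is a placeholder and the bare "arn:aws:lambda" value means all Lambda functions:

```
# record all management events, plus S3 object-level and Lambda Invoke data events
aws cloudtrail put-event-selectors \
    --trail-name DemoTrail \
    --event-selectors '[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": true,
        "DataResources": [
            {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::aws-devops-stephane-course-cloudtrail/"]},
            {"Type": "AWS::Lambda::Function", "Values": ["arn:aws:lambda"]}
        ]
    }]'
```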

So we'll leave this unchecked. Then we can create a new S3 bucket. I'll call it "aws-devops-stephane-course" and then add "cloudtrail", maybe "aws-devops-stephane-course-cloudtrail". Here we go. And for the advanced settings, this is really important. We could set a log file prefix, but we're not going to do this. And then we could encrypt log files with SSE-KMS. By default, if you look at this info button, all the log files delivered into S3 are encrypted using the default SSE-S3 encryption. Okay? But it is possible for you to encrypt them with SSE-KMS by providing a KMS key if you wanted to. For now, I'll just leave it as no. And remember, these log files are by default encrypted using the default SSE-S3 encryption.
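If you did want SSE-KMS instead, this is roughly what it would look like from the CLI; the key alias is a placeholder, and the key policy must allow CloudTrail to use the key:

```
aws cloudtrail update-trail \
    --name DemoTrail \
    --kms-key-id alias/cloudtrail-logs-key
```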

Okay? We're going to enable log file validation; we'll see what that means later on. And we could deliver SNS notifications for every log file delivery, so we could build some DevOps automation. For example, if you wanted to take that log file from S3 and archive it on premises directly, you could build automation with SNS and maybe a Lambda function. Okay, so let's go ahead and create this trail. And my trail has now been created. One thing I had to modify to make it work: I removed the multi-region setting and just made it single region, and then it went through. With multi-region it wasn't working for me; maybe it's a bug they'll fix a bit later.
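Both of those delivery settings can also be turned on from the CLI, by the way; a minimal sketch, assuming the SNS topic already exists (its name here is hypothetical):

```
aws cloudtrail update-trail \
    --name DemoTrail \
    --enable-log-file-validation \
    --sns-topic-name cloudtrail-log-delivery
```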

So we have this DemoTrail and it applies just to this Ireland region. Logging is on, it's recording all the read/write events, and we're not tracking any S3 objects or Lambda functions for now. The storage location is my S3 bucket here, and there was already a last log file delivered. We can check that it's not encrypted with SSE-KMS, but with SSE-S3. And we could also configure the trail to deliver the logs to CloudWatch Logs, so let's do this right now. We'll click Configure and we can say, okay, send it to the CloudTrail default log group, and we'll click on Continue (a CLI equivalent is sketched below).
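Here is that CLI equivalent; the log group and role ARNs are placeholders, and the IAM role it references is discussed right after:

```
aws cloudtrail update-trail \
    --name DemoTrail \
    --cloud-watch-logs-log-group-arn "arn:aws:logs:eu-west-1:111122223333:log-group:CloudTrail/DefaultLogGroup:*" \
    --cloud-watch-logs-role-arn "arn:aws:iam::111122223333:role/CloudTrail_CloudWatchLogs_Role"
```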

And here we go. So there is also this option, after you edit your CloudTrail trail, to send it to CloudWatch Logs. It's good to know that when you do this, you need to create an IAM role for CloudTrail, so we'll just allow the creation of this IAM role. And now CloudTrail is also able to send my logs to CloudWatch Logs. Okay, so let's go into my bucket. In my S3 bucket, I have a CloudTrail directory, then a directory that represents a region, and then we have a date. And here we get our first CloudTrail log file that was delivered. Excellent. We can download it and have a look at what this file looks like.

And it is really important for you to know what's inside these files. So before we open that file, let's check the permissions on the S3 bucket. First, encryption: if I scroll down, we see the encryption says AES-256, which is SSE-S3. So my CloudTrail file was encrypted, which is great. And if I go to my S3 bucket and look at my bucket policy, we can see that with this bucket policy we have allowed the CloudTrail service to write to our bucket. So this is how CloudTrail is able to send data directly into our S3 bucket: it's through the use of a bucket policy, which is good to understand when you look at the permission model as well.
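For reference, that bucket policy looks roughly like this (bucket name and account ID are placeholders):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::aws-devops-stephane-course-cloudtrail"
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::aws-devops-stephane-course-cloudtrail/AWSLogs/111122223333/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
        }
    ]
}
```

The first statement lets CloudTrail check the bucket ACL; the second lets it write objects under our account's prefix, as long as the write grants the bucket owner full control.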

So let's open this file now. I'm within the file, and it is one giant JSON document, so to make it readable I'm going to format it (I used an editor shortcut to format the document). And here we go. Now we can look at what's inside this CloudTrail document. We have a couple of sections in here. First of all, there is an array of records; this is everything that happened within CloudTrail, okay? The second thing is who made the request: the user identity is going to say who made this particular request. Here, this is a role assumed by the ECS service scheduler, so this is something that was done by ECS.

We can look at the access key ID, so we can also check against that, and we can look at the session context and get more information around whether it was a role, how the session was issued, and so on. Okay, then we get some information about the event itself. We can get the event time, so this is when it was done and from where. Then the event source: the source of it was Elastic Load Balancing, and the event name was DescribeTargetHealth. So it was doing a small health check on our ECS cluster directly from Elastic Load Balancing. Okay, then we can look at something important, which is what was requested: these are the request parameters.

So it was requested to look at this target group ARN, this one. So we get that information, and then finally we get what the response was, okay? The response is going to be in the response elements, and right now it is null. So let's try to find a response element that is not null, so we get some more information. Here, for example, we get an AssumeRole: the event source was the STS service, and it was doing an AssumeRole, so it was trying to acquire new credentials. And so we can look at the response elements to see what was returned. As you can see here, some credentials were obtained.

This is the access key ID, this is the expiration, and here and there we have a session token that was acquired. Okay, and then we also get information about the assumed role user. This is all really important, because we get a lot of context directly from CloudTrail, and CloudTrail gives you the entire story when you look at an API call being made. Okay, so this is pretty good. You can definitely explore that file and so on, but what you need to remember about a CloudTrail log file is that we have who made the request, when and from where, what was requested, and also what was the response.
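To summarize that structure, here is a condensed, hand-written example record with placeholder values, keeping only the fields we just walked through:

```
{
    "Records": [
        {
            "userIdentity": {
                "type": "AssumedRole",
                "arn": "arn:aws:sts::111122223333:assumed-role/ecsServiceRole/example-session",
                "accessKeyId": "ASIAEXAMPLEKEYID"
            },
            "eventTime": "2019-08-27T12:34:56Z",
            "eventSource": "elasticloadbalancing.amazonaws.com",
            "eventName": "DescribeTargetHealth",
            "awsRegion": "eu-west-1",
            "sourceIPAddress": "ecs.amazonaws.com",
            "requestParameters": {
                "targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/example/0123456789abcdef"
            },
            "responseElements": null
        }
    ]
}
```

Who (userIdentity), when and from where (eventTime, sourceIPAddress), through which service (eventSource, eventName), what was requested (requestParameters), and what came back (responseElements).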

Okay, so that's it. Now let's get back into our console. To finish it off, you need to understand that CloudTrail doesn't deliver the information in real time to your S3 bucket; it can have up to a 15-minute delay in delivering all these log files into your S3 bucket. So this is definitely not something you can use to react to events in real time; for that, you'd use CloudWatch Events instead. And finally, if we go into CloudWatch Logs, in here we have the CloudTrail default log group, and this gives me a stream of all the API calls that were made, so they can be easily searched.

We have one JSON document per event in here, and the JSON we get has the exact same structure as the one in the file we just saw before. Okay? And so the idea is that if you integrate this with CloudWatch Logs, then we can build some metric filters and so on, and we'll see what that means in this section, to do some advanced DevOps automation. So, two options to remember: CloudTrail can send data into S3 buckets on a regular schedule, or into CloudWatch Logs log groups, where we get one log line per event as it happens.
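As a quick preview of those metric filters, here is what one could look like from the CLI; the log group name matches the default one above, and the filter pattern shown is a common example (counting unauthorized API calls), so the names are assumptions:

```
# count unauthorized API calls appearing in the CloudTrail log group
aws logs put-metric-filter \
    --log-group-name CloudTrail/DefaultLogGroup \
    --filter-name UnauthorizedAPICalls \
    --filter-pattern '{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }' \
    --metric-transformations metricName=UnauthorizedAPICalls,metricNamespace=CloudTrailMetrics,metricValue=1
```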

Okay? And finally, if you go to the event history, it is possible for you to search within CloudTrail for specific event names, or for specific actions done by whom, and so on. So you can say, okay, I'm looking for the event name PutBucketPolicy. Here we have all the event names we can track, and if we pick PutBucketPolicy, we get the information of who made the PutBucketPolicy API calls in my account over time. So this can be really helpful as well; a CLI version of this search is sketched below. All right, so that's it for this lecture. I hope you liked it, and I will see you in the next lecture.
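Here is that CLI version of the event history search, using the 90-day event history:

```
# look up recent PutBucketPolicy calls from the event history
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=PutBucketPolicy \
    --max-items 10
```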

2. CloudTrail – Log Integrity

So CloudTrail delivers a lot of log files into our S3 buckets, and they can be used if you have a security event in your company and you want to verify who did what. But if there's a hacker in your account and the hacker wants to conceal their actions, maybe the hacker will want to change the content of a CloudTrail file. And by changing the content of a CloudTrail file and removing an event, if we were to look at that file afterwards, we would not see the hacker. So for this, we can verify the integrity of these log files using the CLI.

There is a CloudTrail log file integrity verification feature in the AWS CLI, and it will check for the modification or deletion of CloudTrail log files, the modification or deletion of CloudTrail digest files, and also the deletion or modification of both of the above. So let's take this file, for example, and I'm going to simply delete it. So I'll go ahead and delete that file. Okay, now that file has been deleted, and what I can do now is also modify another one. So I'll download one, for example this one, and I'm going to go ahead and modify it.

I'm going to take one of these lines, for example, and say the roleArn is from another account; I'm just going to mess it up. Okay, now I'm going to save this, and I'm going to gzip this file, so gzip plus the file name. And now I will go ahead and upload this file back into S3. So I've selected my file, I'll click on Next, Next, Standard, Next, Upload. And here we go. My file has now been updated, and as we can see, the timestamp is different, so it's definitely been updated.
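To recap those manual steps as a shell sketch (bucket, path, and file names are all hypothetical):

```
# download one delivered log file
aws s3 cp s3://aws-devops-stephane-course-cloudtrail/AWSLogs/111122223333/CloudTrail/eu-west-1/2019/08/27/example.json.gz .

gunzip example.json.gz    # decompress so the JSON can be edited
# edit example.json here, e.g. change a roleArn to a different account
gzip example.json         # re-compress it

# upload it back over the original object
aws s3 cp example.json.gz s3://aws-devops-stephane-course-cloudtrail/AWSLogs/111122223333/CloudTrail/eu-west-1/2019/08/27/example.json.gz
```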

Now I'm going to run the aws cloudtrail validate-logs command to check whether or not my log files still have their integrity. So let's go ahead. I'm going to say that the start time is in 2015; this way I'm sure that it'll take all my files, even though today is in 2019. And the trail ARN is going to be the ARN of my trail, which I need to find. So let's find it in CloudTrail, within Trails: I'll click on this DemoTrail and find the ARN somewhere. Okay, we'll just use this name, and we'll find the account number, and that should be enough. So let's go to the Support Center; this will give me my account number. So here's my account number, and I've added it to my command. I'm going to change the region as well, so it's eu-west-1, and I'm going to change my trail name to DemoTrail; the assembled command is sketched below.
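Putting it all together, the command looks something like this (the account ID is a placeholder):

```
aws cloudtrail validate-logs \
    --trail-arn arn:aws:cloudtrail:eu-west-1:111122223333:trail/DemoTrail \
    --start-time 2015-01-01T00:00:00Z \
    --region eu-west-1
```

The start time just has to be earlier than the oldest file we want validated.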

So that's the full ARN, with the region set to eu-west-1. Let's press Enter. And as we can see, it's going to validate the files, but we only get one out of one digest files valid. So this is pretty tricky, because it seems like we have more log files than that. So let's go and look into this. If we go to our AWS DevOps CloudTrail S3 bucket and we go in here, we have the CloudTrail directory, which contains all of our log files, as we've seen before. So let's have a look: these are all the log files, and we also have another directory, called CloudTrail-Digest, and this contains the digests of all the log files.

And a digest file is delivered only once every hour. So what I have to do is wait a full hour before the new digest file comes in; this digest file will give the CLI more information, which in turn will let it check the CloudTrail logs in the CloudTrail directory. So the bottom line is, I need to wait a bit longer until I get this new digest file, and we'll be using that for the next API call. So let's wait, and then I'll get back to the video. And a new digest file just appeared right now, perfect; it was delivered one hour after the first one. So now, if we run the CLI command again, hopefully we get more results out of it. So it's running. And here we go.

So now it is checking every single log file in our trail, and this is really cool. For the first one, we get "INVALID: not found": this is the log file that I deleted, so now we see that this one is missing, while the other ones are valid. And for the one that I modified, we get "INVALID: hash value doesn't match". So using this CLI, we're able to check the integrity of our CloudTrail files and make sure they haven't been tampered with. That could be really helpful, and this is something you definitely need to know going into the DevOps exam. Okay, well, that's it. I will see you in the next lecture.

3. CloudTrail – Cross Account Logging

So one thing that can come up is how to do CloudTrail across multiple accounts. And it's pretty easy: you can have the CloudTrail trails of multiple accounts delivered into the same S3 bucket. For this, we need to create the trails in each account and make sure we make the bucket policy compatible with multiple accounts. So let's have a look. If we go to our S3 management console, and let's assume this is our central S3 bucket, then we are able, in Permissions, to set the bucket policy. And if we have a look right now, we allow CloudTrail from our account to write to it. But if we wanted to allow another account to write to our S3 bucket, we would need to change the bucket policy.

So, fair enough, we can just look at what such a bucket policy looks like. In here, you are able to say PutObject, and you need to specify the resource ARNs. And when you have another account allowed to write to your bucket, the only line you need to add is the one that has the other account. So let's assume we wanted to add this other account and let it write to our CloudTrail bucket. Let's edit this: we'll add a comma, make the Resource an array, and then close the array; I'll paste this line in and close the array. So this should be valid JSON now.

And so now we're saying: okay, in this S3 bucket that I have here, under this prefix, this account is now also allowed to write. I'll save this. And now we are able to... oh, there is invalid JSON, so I probably did something wrong. Let's have a look: I'm missing a comma right here. Let's click Save now, and this looks good. Now, using this bucket policy, we are allowing our own account and also this other account to write to this S3 bucket, and as such, we are allowing CloudTrail to send files from multiple accounts into a central S3 bucket.
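To make that concrete, the write statement of such a bucket policy ends up looking roughly like this, with one resource entry per account (bucket name and account IDs are placeholders):

```
{
    "Sid": "AWSCloudTrailWrite",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "s3:PutObject",
    "Resource": [
        "arn:aws:s3:::aws-devops-stephane-course-cloudtrail/AWSLogs/111122223333/*",
        "arn:aws:s3:::aws-devops-stephane-course-cloudtrail/AWSLogs/444455556666/*"
    ],
    "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
}
```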

So this is something you have to see to know how it works. This other link shows you a diagram of what is actually happening when we do this: all the CloudTrail trails from the different accounts are able to send their logs directly into the S3 bucket, thanks to the bucket policy, and this is our central account. Now, if you wanted to provide access for account B or account C, to give read-only access, for example, to this S3 bucket, what you could do is attach an IAM role or an access policy to allow people to read directly from these logs. And so you could have different kinds of access policies and different conditions; for example, something like the sketch below.
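A read-only policy scoped to one account's prefix might look like this (bucket name and account ID are, again, placeholders):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::aws-devops-stephane-course-cloudtrail/AWSLogs/111122223333/*"
        }
    ]
}
```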

These conditions could say that you can only read the prefix that contains your account number. So if we go back to S3 and we look at these folders, as you can see, here is my account number. So I could just say account A can only access prefix A, and account B can only access prefix B, and that way we'd make sure that only the people who have access to their own logs can read their logs in the S3 bucket. So that's it for this lecture. But remember, cross-account and cross-region logging is very important in AWS CloudTrail, and something you need to know for the exam. So I hope that was helpful, and I will see you in the next lecture.
