Amazon AWS DevOps Engineer Professional – Monitoring and Logging (Domain 3) Part 6
August 29, 2023

15. CloudWatch – Unified CloudWatch Agent Part II

So we are back in the second part of the unified CloudWatch agent configuration. We just put the parameter into the Parameter Store, so let's have a look. I'm going to open SSM, which is AWS Systems Manager, and we'll see Systems Manager in depth in a future section. So here we go, we have parameters in here, and I'm going to look at this one, called AmazonCloudWatch-linux. We can look at the value, and this is the large JSON that was created by the wizard and inserted into this parameter. And why do we have this? Well, it's pretty obvious: we want other EC2 instances to boot up, directly fetch the value of this configuration, and use it for their CloudWatch agent configuration.

So that makes a lot of sense. Now how do we use this? Because right now, if you go to CloudWatch, for example, and refresh, nothing has happened: we don't have any logs, and if you go to metrics, we don't have any metrics. So how do we start things, and how do we point them at this new SSM parameter? For this, it's pretty easy: you have two options. You can either boot your CloudWatch agent from an SSM Parameter Store parameter name, or you can boot it directly from a file on the file system. So, two options. Either we use the file that was written onto our operating system, saying, okay, look at the content of this file and use it to boot up.

That would be the second command. Or we'll do like the cool kids do and use the first one: you start the CloudWatch agent by fetching a config, the mode is going to be EC2, and the Parameter Store name needs to be the one we had in here. So I'm going to go to Systems Manager, copy this, and back in here, right click. And this is not working, so let's do it the old way and just type it out: it's called AmazonCloudWatch-linux. And here we go, it says, okay, it successfully fetched the config and saved it into a template, then it starts to validate the configuration.
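The command used here looks roughly like the following (a sketch: the agent install path and the parameter name are the defaults suggested by the wizard, so adjust them to your setup):

```shell
# Start the unified CloudWatch agent, fetching its configuration from
# the SSM Parameter Store parameter created by the wizard:
#   -a fetch-config : fetch a fresh configuration
#   -m ec2          : we are running on an EC2 instance
#   -c ssm:<name>   : read the config from this SSM parameter
#   -s              : start (restart) the agent afterwards
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -c ssm:AmazonCloudWatch-linux -s

# The file-based alternative mentioned above would instead use:
#   -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
```

Either way, the agent validates the configuration before starting.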

It says the validation phase has succeeded but the second phase has not, and the reason is that we're missing a file called types.db. So what we'll have to do is create this file. I'm going to run mkdir -p with sudo to create the right directory, /usr/share/collectd, and then sudo touch /usr/share/collectd/types.db to create the file itself. So now the file has been created, and hopefully we can go ahead and restart the agent. And yes, now the agent is working and everything has started, so we should begin seeing the logs and the metrics in CloudWatch.
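The two commands, assuming the standard collectd path:

```shell
# The agent's collectd plugin expects this file to exist,
# even if it is empty. Create the directory, then the file:
sudo mkdir -p /usr/share/collectd
sudo touch /usr/share/collectd/types.db
```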

So let's have a look. I'm going in CloudWatch, I'm going to refresh this, and now I see my access log and error log groups that have been created. Here I'm able to see my log stream, which represents the access log that I have from my Apache server, and I could obviously go in here and look at the error log as well. Excellent. And then finally, if I go to CloudWatch Metrics, I can look at CWAgent, the CloudWatch agent namespace, and I get some custom metrics. For example, I can get disk used percent, so we can see how much of the disk is being used by each directory. But I could also go one level up and look at this one, which shows me the RAM used: here, 12% of my RAM is used.

And again, this is a custom metric that has been inserted by the unified CloudWatch agent into CloudWatch Metrics. So, really cool. It's a long configuration, I know, but the idea with this new unified agent is that the configuration itself is stored in the Parameter Store and can be changed whenever you want. And all the instances, maybe using EC2 user data, would simply fetch this configuration directly from SSM and use it to configure the unified CloudWatch agent, which is a lot easier than somehow making this file appear on your instances magically every single time.

So I think this is quite an improvement from Amazon, especially as it provides us a lot of different metrics and a lot of facilities for log files and so on. Finally, you may ask what metrics can be collected by this new CloudWatch agent. Well, if you scroll down, you can see there are a lot of them. They can be around CPU, so CPU time, usage active, usage guest and so on, giving us CPU information. You also get some disk information, so how much disk is being used on the machine, which is interesting: disk free, disk used as a percent, I/O time and so on. And if we scroll down, we get some information about the memory, so the RAM.

So how much is active, how much is available, available as a percent, free, total and so on. We also get information around the network interfaces, so the number of packets sent and received and so on. And if I scroll down, we get information about the processes: how many are dead, blocked, idle, paging, running, stopped and so on. And finally swap: swap free, used, and used percent. So there are a lot of new metrics that can be collected by the unified CloudWatch agent, and enabling them is super simple: you would go into the Parameter Store, edit the configuration, add these metrics in there, and you'd be all done. So that's it for this lecture. We're pretty happy with the state of things now: we have two log groups to work with. I will see you in the next lecture.

16. CloudWatch Logs – Metric Filters & Alarms

So now we have the access log of our httpd application in a log stream. This access log is pretty standard: it gives us information about the network calls that were made, from which IP, when they were made, which verb was used (GET, PUT and so on), the HTTP version, and the return code of the call, for example 200, 400 or 404. Okay, so imagine this is our production application and we'd like to ensure we don't have too many 404 errors coming out. We could go ahead and publish a custom metric, but that would require a bit of code, because we would need to implement within our application a way to send a 404 metric to CloudWatch Metrics whenever one happens.

But we already have the log, and the log contains this information. So why don't we go ahead and create something called a metric filter? This will take the log file, filter it based on some criteria, and create a metric out of it. For example, if we filter for the error code 400, then we catch those and say, okay, these are 400s. And what about 404? Currently we don't have one, so I'm going to create one: I'll request a path that does not exist and press Enter, and since that page obviously does not exist, we get a 404. And if we go into CloudWatch Logs, hopefully we should see that log line very soon. Okay, so let's go back into the logs, into the access log, one level up, sorry, in here.

And we're going to create a metric filter. So let's click on Metric Filters and click on Add Metric Filter; this is at the log group level. In here we should choose a filter pattern, and we can see examples: one of them is a way to match log events with a 400 HTTP response. So let's use that one, but we'll change the status code from 4* to exactly 404. Okay? Now we can test the pattern, and it says that out of all these log lines it found two matches out of ten, and we can look at the test results in here. So this pattern looks like it's working, so we'll keep it as is. And this is great, but there's obviously a lot of other syntax you could use.

The idea is that if there is anything you can extract out of these log lines, then there is a pattern syntax to extract it and create a metric filter out of it. So here we're saying, okay, look at all these logs, and whenever the status code equals 404, please count it and tell me it happened. We have two out of ten events in the sample log. Next we'll assign a metric to it. So after creating the metric filter, we create a metric: this is the filter name, which is fine as is; the metric namespace is going to be called LogMetrics; and the metric name is going to be 404NotFound. Okay? And this is a metric that comes straight out of our metric filter.
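The same filter and metric could be created from the CLI (a sketch: the log group name is an assumption, and the pattern uses CloudWatch's space-delimited syntax for a standard Apache access log):

```shell
# Create a metric filter on the access-log log group that counts
# log events whose status-code field equals 404. The names inside
# the brackets are arbitrary labels; field position is what matters.
aws logs put-metric-filter \
  --log-group-name "access_log" \
  --filter-name "404-not-found" \
  --filter-pattern '[ip, id, user, timestamp, request, status_code=404, size]' \
  --metric-transformations \
      metricName=404NotFound,metricNamespace=LogMetrics,metricValue=1
```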

We could set advanced settings, for example the metric value and the default value, but for now we'll keep them as they are and create the filter. Okay, so now this has been created, and out of it we've created our first metric, one that is based on a logs filter. So now, what if we want an alarm saying, okay, whenever we get more than three 404s in a five-minute interval, please send us an email? Let's create an alarm on top of this metric. As you can see, we go back to CloudWatch Alarms for this, and we just need to select the namespace, which is our custom namespace LogMetrics, and the metric name, 404NotFound.

We'll look at the Sum over five minutes and say, whenever you're greater than three, you should alert us and send a notification to an existing SNS topic, this one here. Excellent. Click on Next, name the alarm "too many 404", and click on Next again. For now we don't have any data available, so this is fine, it will not trigger. Let's create the alarm; it's been created, so let's wait for it to get sufficient data and go into the OK state. So here my metric is showing OK now, and it has one data point right here. So what I'm going to do is go to my web page and refresh it a bunch of times.
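Equivalently, the alarm could be created from the CLI (a sketch; the SNS topic ARN is a placeholder):

```shell
# Alarm whenever the 404 count, summed over a 5-minute period,
# is greater than 3; notify an SNS topic when it fires.
aws cloudwatch put-metric-alarm \
  --alarm-name "too-many-404" \
  --namespace "LogMetrics" \
  --metric-name "404NotFound" \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 3 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions "arn:aws:sns:eu-west-1:123456789012:my-topic"
```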

And what this will do, hopefully, is write a few log lines into my CloudWatch Logs. So if I go back to CloudWatch, go to my access log and the log stream in here, and look for 404, we should start seeing a lot more 404s. As you can see, the log was streamed directly from my instance into CloudWatch Logs by the unified CloudWatch agent. And so hopefully, if I go back to my CloudWatch alarm now and refresh, I should start seeing more data points, if it went quickly enough into CloudWatch. And yes, here we go: this data point has breached my limit. We're now at six, thanks to my metric filter, and hopefully very soon the alarm should go into the ALARM state.

I won't wait for that, but you get the idea. So the whole process again: go to your logs and click here. As you can see, I have one filter right here, but I could add different metric filters. By adding a metric filter, you can create a metric out of it, and from that metric you can create an alarm on top of it, and that alarm can do anything any other alarm can do. So remember, going into the exam, how to create a metric filter, what it can be used for, and the process of creating an alarm on top of the metric from your metric filter. I will see you in the next lecture.

17. CloudWatch Logs – Export to S3

So now we have our logs in CloudWatch. But what if you want to export these logs, for example into S3? One thing you could do is click on this log group right here, click on Actions, and here we have the option to export data to Amazon S3. I'm saying, okay, I want to export data from this date to this date. We could also set a stream prefix if we wanted only specific streams within this date range, but we'll keep it as is. And what bucket do we want to send the data to? We'll choose a bucket, for example stefan devops course. Excellent. We can also set a bucket prefix; I'll call this one cloudwatch-logs-exports and add a slash to it. Okay, click on Export Data. And this will work once you set the right bucket permissions.

So let's do this right now. We're going to go into S3 and make sure that CloudWatch Logs has access to our bucket. Let's go into the bucket, then Permissions, then Bucket Policy. The kind of bucket policy we need looks like this, so let's add it on and remove the bits we don't need. Here we go. We need to allow s3:GetBucketAcl on the bucket, so I need to put my bucket name, stefan devops course, in here, and we're authorizing the service logs.eu-west-1.amazonaws.com, making sure we have the right region. Then s3:PutObject, again for this bucket, so I'll copy the bucket name in here, and the condition is that the ACL is bucket-owner-full-control.
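The resulting policy looks roughly like this (a sketch: the bucket name is a hyphenated guess at the one used in this demo, and the region is eu-west-1 as shown, so substitute your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.eu-west-1.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::stefan-devops-course"
    },
    {
      "Effect": "Allow",
      "Principal": { "Service": "logs.eu-west-1.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::stefan-devops-course/*",
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
```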

And the service is logs.eu-west-1.amazonaws.com. So this should be good in terms of bucket policy; let me save it. Okay, it's been saved. Now let's go back into CloudWatch and export the data, and this time it went through, because the bucket policy allows the CloudWatch Logs service to write to the bucket. So back in here, let me refresh, and now we have the CloudWatch logs export prefix. In here I'm able to look at the log file and see all the logs that were in the time range I specified earlier. Perfect, that was the first way to export data to S3. I can also click here and view all the exports; these are tasks, so they can take a long time to complete. Okay. And if I wanted to automate this, I could, for example, use CloudWatch Events.

So I would create a new CloudWatch Events rule. I'll click Create Rule, and this one will be a schedule, say every 1 hour. And what it's going to do is invoke a Lambda function, so we would need to write a Lambda function that makes this same export API call to send the data into S3. This would work, but you can probably see a few problems. The first is the delay: if my CloudWatch Events rule runs every 1 hour, then I may need to wait up to 1 hour until my log data is delivered into S3. And then we rely on the Lambda function, which is something we have to build and maintain as well. But this would work.
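The API call that the Lambda function would make corresponds to this CLI command (a sketch; the log group name, timestamps, bucket, and prefix are hypothetical):

```shell
# Start an export task copying one log group's events, for a given
# time window (milliseconds since the epoch), to an S3 bucket.
aws logs create-export-task \
  --log-group-name "access_log" \
  --from 1693260000000 \
  --to 1693263600000 \
  --destination "stefan-devops-course" \
  --destination-prefix "cloudwatch-logs-exports"
```

Export tasks run asynchronously, which is why the console shows them in a separate task list.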

Okay, this is definitely one solution to export CloudWatch logs into Amazon S3, and maybe a good one. But we'll explore other ways as well in the future, such as using a CloudWatch Logs subscription, if we wanted to get our log data into S3 a bit quicker, though obviously at a higher price. The solution here will probably be the less expensive one: we launch a Lambda once in a while, and the CloudWatch Logs service delivers our log files directly into Amazon S3 at very low cost. So, just a cool automation I wanted to show you. We're not going to write the Lambda function, but remember the architecture, and I will see you in the next lecture, where we will explore CloudWatch Logs encryption.
