AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01) Certification Video Training Course
AWS DevOps Engineer - Professional (DOP-C01) Training Course
14h 5m
141 students
4.0 (81)

Do you want efficient and dynamic preparation for your Amazon exam? The AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01) certification video training course is a superb tool for your preparation. The Amazon AWS DevOps Engineer Professional certification video training course is a complete set of instructor-led, self-paced training that works as a study guide. Build your career and learn with the Amazon AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01) certification video training course from Exam-Labs!

Start Course

Student Feedback

4.0 (Good)
5 stars: 30%
4 stars: 42%
3 stars: 28%
2 stars: 0%
1 star: 0%

AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01) Certification Video Training Course Outline

SDLC Automation (Domain 1)

AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01) Certification Video Training Course Info

Gain in-depth knowledge for passing your exam with the Exam-Labs AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01) certification video training course. Exam-Labs is a trusted and reliable name for studying and passing, with VCE files that include Amazon AWS DevOps Engineer Professional practice test questions and answers, a study guide, and exam practice test questions, unlike any other AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01) video training course for your certification exam.

SDLC Automation (Domain 1)

12. CodeBuild - Docker, ECR & buildspec.yml

So if you need more examples of what buildspec.yml files can look like, there are tons of code samples you can use and look at, for example for EFS, for Docker, for ECR, for GitHub, and so on. One that I want you to definitely look at is the Docker sample for CodeBuild, which shows how we can use CodeBuild to build a Docker image and then push that image into Amazon's Elastic Container Registry, the ECR repository. This is a very common setup, and one the exam may actually test you on. So what is it? Let's scroll down all the way to the buildspec.yml; you could do the tutorial on your own, which I think is really, really good learning. Here we have our install phase, and we're saying we want Docker version 18, which makes sure that Docker commands can be run in our CodeBuild container. Then we need to log in to Amazon ECR. ECR must be logged into using a command, and for this we use the aws ecr get-login command, which comes from the CLI. For this command to succeed, we obviously need to make sure that the service role attached to our CodeBuild project has the permissions to access ECR. If you don't remember, let's go back to IAM just to show you: there was a service role attached to our CodeBuild project when it was created. Here it is, the CodeBuild service role for my webapp project. So yes, this service role was created, and if we wanted to run this buildspec.yml file, we would need to add some policies to it, for example one for ECR. Okay, next, before we build, we run this pre_build command, which logs us in to Amazon ECR and will allow us to push images to ECR. Then we do the build. Here the build is running a docker build command, and this succeeds because we asked for Docker version 18 in the install phase. So we're saying, "Okay, Docker, build this image," and then we're going to tag this image with the right naming convention using our AWS account ID, our default region, our image repository name, and the image tag. A lot of the things we see here are environment variables: there is the account ID, the default region, and so on, and we'll see those in the environment variables lecture in this section. Finally, when the build is done and the image is successfully tagged, we need to push it somewhere, because if we don't, the Docker container will be destroyed and the image lost. And as such, in post_build we do a docker push of the image that has just been tagged, and it will be pushed directly into ECR. Why does that work? Because we were able to successfully log in. The reason we log in in the pre_build phase and not in the post_build phase is this: imagine the scenario where we build everything, and only then run ecr get-login, and it fails because we don't have the IAM permissions. Then we just wasted a lot of time building all these things, and in the end we're still not able to push. This is why we do it in pre_build: if things go wrong, we want them to go wrong early. So in pre_build we log in to ECR, and if we can't log in, the build won't even start. It makes sense to have the aws ecr get-login command in the pre_build phase instead of the post_build phase. In post_build, we're just pushing our Docker image into ECR.
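For reference, here is a minimal sketch of what such a buildspec.yml can look like, modeled on the AWS Docker sample; IMAGE_REPO_NAME and IMAGE_TAG are assumed to be environment variables you set on the project yourself, while AWS_ACCOUNT_ID and AWS_DEFAULT_REGION are provided by CodeBuild:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      docker: 18                      # make Docker commands available in the CodeBuild container
  pre_build:
    commands:
      # Log in to ECR early: if the service role lacks ECR permissions, fail before building anything
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
  build:
    commands:
      # Build the image and tag it with the full ECR repository URI
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      # Push the tagged image to ECR so it outlives the temporary build container
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
```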
So this is a very common pattern to remember when using CodeBuild and Docker, and I think it's a really good example illustrating a lot of things around security, around IAM permissions, around Docker, and so on. That's it for this little example, but if you do have time, please try to practice it yourself; it will be a lot of good learning for you. Alright, I will see you in the next lecture.

13. CodeBuild - Environment Variables & Parameter Store

Okay, so if we go back to our Docker example, as we can see in this screenshot, we had this dollar sign AWS account ID and the default region, and these were CodeBuild-provided environment variables. As a matter of fact, you should know that a lot of them are provided by CodeBuild for you. There is the default region, the region where the build is running (both are used by the CLI and the SDK), then the CodeBuild ARN, the build ID, the build image, whether or not the build is succeeding, the KMS key, the log path, and so on. All of this can be used to make your builds more dynamic, and that makes a lot of sense. This is why this buildspec.yml references the account ID and the default region directly from these variables. So there are a lot of these environment variables, obviously, and you can use them as much as you want. So let's print them: let's run the printenv command in our CodeBuild project. For this, I'm going to edit the buildspec.yml, and in the install phase I'm just going to run the printenv command. And here we go, this will print all the environment variables. I'll set the author and the email ([email protected]) and commit. Excellent. Okay, so now this buildspec.yml has been committed to master. Also, back in the repo, there was one file that we needed to change back to using "congratulations", so I'll edit my index.html so it uses congratulations again and the build doesn't fail, set the author and the email ([email protected]), and commit; it should be quick. All right, here we go. So now our repository is ready, and if we go back to CodeBuild and start a build, we should have all the environment variables printed into the log. So let's wait a second to see if that works and if the build has succeeded. And if we scroll down, we can see that yes, in this "Running command printenv" section, we can see all the environment variables available in this Docker container. If you look in here, you'll see the CodeBuild auth token, the CodeBuild log path, and so on. I'm confident that if we search for the region, we'll see that yes, the default region is eu-west-1, and the region is eu-west-1 as well. So we have a lot of different environment variables we could use if we wanted some dynamicity in our build, but we can also specify our own environment variables. So let's go back and launch a build: I'm going into my build project and starting a build, and in here I'm able to add some environment variables as overrides. So remember, we can either set those in our buildspec.yml file (if we go back to the reference, we can set them there directly, either as plain environment variables or from the Parameter Store), or we can specify them directly from within the console as overrides. So we have different choices based on what we want to do. But the important thing here is that these environment variables, if we have them in the code, are visible, and even in here, in the console, they're visible. And as such, if I have a DB_URL here, I can say, okay, mydburl.com, and this is fine; maybe my DB URL is not something very sensitive, so I'm happy to have it as an environment variable in plain text.
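As a rough sketch, the buildspec.yml described here might look like the following, with a hypothetical plain-text DB_URL variable and printenv in the install phase to dump everything into the build log:

```yaml
version: 0.2

env:
  variables:
    DB_URL: "mydburl.com"   # plain-text variable, acceptable for non-sensitive values

phases:
  install:
    commands:
      # Print every environment variable, including the CodeBuild-provided ones
      # such as AWS_DEFAULT_REGION, CODEBUILD_BUILD_ID or CODEBUILD_BUILD_ARN
      - printenv
```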
But if we want to have a DB password, say my super secret password, then this is really bad, right? Because anyone could go into our CodeBuild project and find our DB password directly as an environment variable, and it would get logged, which would be really bad. So we don't want to have the DB password in plain text; we can have it as a parameter instead, and this uses the Parameter Store. For this, let's go into SSM, into Systems Manager, where we'll find the Parameter Store at the very bottom left. We could create a parameter from the CodeBuild UI right here, with a name and a value and optionally a KMS key, but I also want to show you how that works directly from the SSM UI. So we'll create a parameter, and the name will be, for example, the prod DB password. It could be a standard String parameter, or it could be a SecureString, meaning it's going to be an encrypted secret; we'll use the current account's KMS key to encrypt it, and the value will be my super secret password. Excellent. Then I'll click on "Create parameter". So this parameter has been created, and it's a secret parameter, a SecureString, and we can have CodeBuild retrieve it for the build at runtime. So here, instead of typing the DB password directly, I will use a parameter, the name of that parameter is going to be the prod DB password we just created, and its value will be stored in the DB_PASSWORD environment variable. So for DB_URL we use the plain-text field right here, but for DB_PASSWORD the value comes from this parameter. If we do this and start a build right now, CodeBuild will try to access Systems Manager and retrieve this parameter, but you know it will not work. Why? Because IAM will come into play and say you're not authorized to do so. So we need to go back into our CodeBuild role and attach a policy, maybe one around the Parameter Store. Is there one? No. So we search for SSM, and we'll attach SSM read-only access, which is enough to provide access to the parameters. Now that it has been attached, hopefully, if we run our build, it should work. Let's have a look and see if it worked and if it printed our DB URL and DB password into the log; with this, we have a much more secure build. Let's wait a minute, and it says the build has succeeded, so that's really good news. Let's scroll down, and actually I'm just going to search for it. So we have DB_URL right here, which is mydburl.com, so that's the manually specified environment variable. And for DB_PASSWORD, remember we specified an SSM Parameter Store value, and it has now been decrypted at runtime into my super secret password. So keep in mind that using the Parameter Store is the safest way to add environment variables to your CodeBuild project, and this is something the exam will test you on. Again, as a reminder, understand the difference between environment variables that are plain text and those that come from the Parameter Store; both will be accessible at runtime by your buildspec.yml to use and do whatever you want. This can be very useful if you need to read from a database, push to a Docker container repository, or do anything else with secrets in your CodeBuild project. So that's it.
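Declared in the buildspec.yml rather than in the console, the same idea might look like the sketch below; the parameter name prod-db-password is a placeholder for whatever name you gave the SecureString in Parameter Store:

```yaml
version: 0.2

env:
  variables:
    DB_URL: "mydburl.com"            # non-sensitive, fine as plain text
  parameter-store:
    DB_PASSWORD: "prod-db-password"  # fetched and decrypted from SSM Parameter Store at build time

phases:
  build:
    commands:
      # Both values are exposed as regular environment variables during the build
      - echo "Connecting to $DB_URL"
```

Remember that the CodeBuild service role still needs ssm:GetParameters (and decrypt access to the KMS key for SecureString values) for this lookup to succeed.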
I hope you liked it, and I hope that makes a lot of sense for you. I will see you in the next lecture.

14. CodeBuild - Artifacts and S3

Okay, so next we are going to specify artifacts for our CodeBuild project. So what are artifacts? When you use CodeBuild to run tests, as we just did, you don't really build anything. That's okay: we're just running tests, and if the CodeBuild project passes, if the build history shows a success, that means the tests were successful. But as the name indicates, CodeBuild can also be used to build things, and when we build things, we generate new files. If you have a Java project, for example, you would generate JAR files; if you have a web project, you would generate some build output at the end. That build output is an artifact, and that artifact needs to be uploaded somewhere, for example to S3, to be consumed by other programs and deployed wherever you want. So let's assume that we are building something, and we'll add a little bit of information to our buildspec.yml to say that we are indeed creating some artifacts, even though we're not, we're just doing testing; it's okay, we'll specify an artifact anyway. So let's go edit our buildspec.yml, and I will declare an artifact whose name will be my-web-app-artifacts. Here we are saying that every file in our repository will become our artifact, but we could have said, for example, that only index.html is going to be our web app artifact; really, you're free to do whatever you want. Here we'll just choose every single file. I'll set the author and the email ([email protected]), the commit message will be "Added buildspec.yml artifacts generation", and I click on Commit changes. Here we go. So we specified which files should be part of the artifact and what the name of the artifact is, and the buildspec.yml is now committed to master. In CodeBuild, let's go into our build project, go to Edit, and edit the Artifacts section. Here it says Artifact 1 is the primary artifact, and right now there is no artifact, which is okay: it says you might choose no artifacts if you are running tests, as we did, or pushing a Docker image to Amazon ECR, which we saw before in the example with a docker push in the post_build phase. But now we'll specify Amazon S3 as the primary place to store our artifacts, so we need to specify a bucket name. For this, let's go to Services and S3, and we will create an S3 bucket. I'll call it cicd-stephane-devops and click on Create, and so we have created our bucket. Okay, now we'll just refresh the CodeBuild page so the bucket appears, select it, and then set the name of the folder or compressed file that will contain our output artifacts; if the name is not provided, it defaults to the project name, so we won't provide anything. We can enable semantic versioning if we want, and we could specify a path to use a subdirectory, but we won't for the time being. Then there is the artifacts namespace: with the build ID, for example, every time we build something, a new build ID and a new artifact will be created, so maybe we'll use the build ID here. Then we decide whether to package the artifact files: they can be uploaded to the bucket as-is or zipped, and maybe Zip is a great option. We could also disable artifact encryption, but by default artifacts are going to be encrypted.
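A sketch of the artifacts section we just added to the buildspec.yml could look like this; my-web-app-artifacts is simply the name used in the lecture, and the files list is intentionally greedy:

```yaml
version: 0.2

phases:
  build:
    commands:
      - echo "Running tests..."       # we are only testing, but we still declare an artifact

artifacts:
  files:
    - '**/*'                          # every file in the repository becomes part of the artifact
    # - index.html                    # alternatively, restrict the artifact to specific files
  name: my-web-app-artifacts          # folder/ZIP name in S3; defaults to the project name if omitted
```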
Another great way of enabling encryption for your build artifacts is to go to your S3 bucket properties and enable default encryption, for example with AES-256, so that whenever an object is uploaded into S3, it will be encrypted using SSE-S3. Excellent. This way we're sure that everything is encrypted. We can also allow CodeBuild to modify the service role so it can be used with this build project. Remember, this is our CodeBuild service role, and right now it does not have the permission to write to that S3 bucket; but if we start uploading artifacts to the S3 bucket, it definitely needs some IAM permissions for that. As a result, in these settings, we allow CodeBuild to modify the service role so it can be used with this build project. Okay, excellent. Here we could specify a different encryption key, or a cache if we wanted one, but for now we're good, so let's click on Update artifacts. Okay, this has been updated. Let's go to IAM and refresh the page, and now we should see an additional policy: if we go here and look at S3, yes, now we have read and write access to S3, which is great and is going to work for us. Okay, excellent. So now why don't we go ahead and actually start a build? I click on "Start build", it will start a new build, the source is the same master, and we click on "Start build". Now the build is starting, and what we should see, if it picks up the changes in my buildspec.yml file, is new artifacts uploaded into S3. So let's wait a little bit to see if that worked. And yes, the build has succeeded. If we scroll all the way down, we can see that it says, okay, now we're handling artifacts, we found 17 files, we upload all these artifacts, and it succeeded. So that's excellent. And if you look into the phase details again, the upload artifacts phase has succeeded, which is all very good news. So now, if we go to S3 and refresh, hopefully, yes, here is our CodeBuild build ID; it corresponds to this string right here. And if I click on it, I have this file right here that I can download, and it is a ZIP file. If I extract it, I will find all the files that are in my CodeCommit repository. So it has created an artifact for me. And again, remember, why do we do artifacts? Well, it's to pass something to the next stage, and this is when we'll see CodePipeline and so on, and how helpful that is. But it's really, really good right now: we've seen how to upload a build artifact into S3, and there was some configuration needed. We can also see that this object is encrypted using AWS KMS: if we look at the overview right here, the server-side encryption is AWS-KMS, and here is the KMS key ID that was used to encrypt this artifact. That's really reassuring: by default, anything we create and output to S3 will be encrypted. But, just in case, we also set up default encryption on the bucket to use SSE-S3, in case CodeBuild did not encrypt something. So perfect, everything works just fine, and we've just seen artifacts, so that wraps up a really good build. And I will see you in the next lecture.
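If you prefer to define that artifact bucket as code rather than through the console, a minimal CloudFormation sketch with SSE-S3 default encryption (the bucket name is just an example and must be globally unique) might be:

```yaml
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: cicd-stephane-devops          # example name; S3 bucket names are globally unique
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256              # SSE-S3 default encryption for every uploaded object
```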

15. CodeBuild - CloudWatch Events, CloudWatch Logs, CloudWatch Metrics & Triggers

Okay, so now let's discuss how CodeBuild integrates with CloudWatch in different ways. The first way, obviously, is CloudWatch Logs. When we went into the configuration editor, we had the option to enable CloudWatch Logs as well as to send logs to S3. So, if we go to CloudWatch Logs, remember that we have our CodeBuild log group, and any time a build is run, we get a new log stream named with the build ID, and for that build ID we get all the lines of the log that happened. This is really handy, because every time a CodeBuild build finishes, the underlying Docker container is gone, and so the only trace of the log will be in CloudWatch Logs, or in S3 if we enabled that option. That's the first integration, and it's actually a very simple one. Next, the integration we have is around CloudWatch metrics. We have some CodeBuild-specific metrics, and by the way, if you go back to "All", there is an automatic dashboard we can look at. Here we get some information around how many builds have been successful, how many have failed, how many builds have happened over time, and the average duration of a build. So this is a really nice dashboard built for us by CloudWatch, and it shows the different metrics exposed by CodeBuild. What would we use this for? Well, for example, if we had too many failures, we could create an alarm on top of the number of builds that have failed. Or if the builds start taking a lot longer than usual and the average duration is too high, maybe something's wrong and maybe we're going to overpay, so maybe we also want an alarm on this. And if we have too many builds happening, maybe there's a bug, and we may want an alarm on this as well. So all these metrics exposed by CodeBuild are here; we have them per project or at the account level, and we get a bunch of them, about 14 per project and 14 per account, so you can begin building some really nice integrations. But the most important integration is going to be around CloudWatch Events, and there are two kinds I want to show you. The first one is how we schedule CodeBuild builds from time to time. Remember, when I was starting my build, I clicked "Start build" to make it happen. But what if we wanted to test our project every hour, for example? We could go to CloudWatch Events and create a schedule stating that a new event will be emitted every hour, and the target of that event is going to be our build project. For this we need the build project ARN, so I click on "Learn More", and I can see that the build project ARN has this form. Let me paste it here: we need the region, and we are in eu-west-1; for the account ID, you can go to Support Center right here, and at the very top you get the account number. So here is my account ID, then we remove the space and add the project name, which we copy back from CodeBuild and paste; unfortunately, there's no better way of doing this. So here is the full ARN of my CodeBuild project. And because CloudWatch Events needs to invoke CodeBuild, a new role will be created for this specific rule, called something like Invoke-CodeBuild. Okay, configure details, and I'll name the rule accordingly, something like invoke-codebuild-every-hour.
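As a sketch, the same scheduled rule could be expressed in CloudFormation roughly as follows, assuming the project is named my-web-app and that InvokeCodeBuildRole (defined elsewhere) allows events.amazonaws.com to call codebuild:StartBuild:

```yaml
Resources:
  HourlyBuildRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Trigger the CodeBuild project every hour
      ScheduleExpression: rate(1 hour)
      State: ENABLED
      Targets:
        - Id: codebuild-hourly-target
          Arn: !Sub arn:aws:codebuild:${AWS::Region}:${AWS::AccountId}:project/my-web-app
          RoleArn: !GetAtt InvokeCodeBuildRole.Arn   # role allowing CloudWatch Events to start builds
```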
And that's what we'll do: this is how we can trigger CodeBuild every hour, using CloudWatch Events, to test our project frequently. But there is something else: anything that comes out of CloudWatch Events, any event in here, can eventually have CodeBuild as a target, and so we can have some really cool integrations. For example, CloudWatch Events can be used to say, okay, whenever something happens to my CodeCommit repository, invoke a target. There will be a better way, such as using CodePipeline, but that is one way to build it. Okay, next, with CloudWatch Events, we can also react to things happening to the CodeBuild service itself. If you scroll down and select CodeBuild in here, we can react to a bunch of events in CodeBuild. For example, the build state can change: it can change to FAILED, IN_PROGRESS, STOPPED, or SUCCEEDED. So maybe when a CodeBuild build fails, we want to have a target, and that target could be a Lambda function or an SNS topic, so that a message saying "Hey, something went wrong with the CodeBuild build" is sent to that SNS topic. Or it could be a phase change, for example when the build enters the SUBMITTED phase, so we can have a look at all the submitted builds and then invoke a Lambda function that will post this into Slack, who knows? So all these things, the build state change and the build phase change, can be events caught by CloudWatch Events, and then we can use all the targets CloudWatch Events offers. So we can start building some really cool integrations with this. I wanted to show you this at a high level so you can see all the different types of integrations CloudWatch has with CodeBuild, because it is the foundation for your AWS DevOps projects, and the exam will definitely ask you how to build this kind of automation. So that was helpful, and I will see you in the next lecture.
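For the build-failure notification described above, a hedged CloudFormation sketch of the event pattern might look like this; BuildAlertsTopic is an SNS topic assumed to be defined elsewhere in the template:

```yaml
Resources:
  BuildFailedRule:
    Type: AWS::Events::Rule
    Properties:
      Description: Notify an SNS topic whenever a CodeBuild build fails
      EventPattern:
        source:
          - aws.codebuild
        detail-type:
          - CodeBuild Build State Change
        detail:
          build-status:
            - FAILED
      Targets:
        - Id: notify-build-failures
          Arn: !Ref BuildAlertsTopic              # SNS topic assumed to exist elsewhere
```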

16. CodeBuild - Validating CodeCommit Pull Requests

So, in CodeCommit, we can have all of our code on master, but it can also be on various types of branches, and then we can create a pull request to merge our code from branches into master. And we've seen how CodeBuild allows us to test a source, which could be a branch, a commit ID, or anything else, and then run some tests or build some code, giving us a success or failure state: is the code working or not? A natural integration, then, is how we test branches and how we test pull requests. The idea is that you would create a feature branch and push code to it. Then you would create a pull request, and that pull request, remember, is code that's waiting to be reviewed. If you look at the pull requests we had, the ones that were previously closed, we had some changes, and on them we could add comments and say this looks good or this doesn't look good. But what if we had an indication that the code is working, that it is buildable, and that it has been tested successfully? So we want CodeBuild to test our pull request, because this will give us a good indication as to whether or not the code is working before we merge it into master, and that is best practice. There is a post on the AWS DevOps Blog about this, and by the way, if you're going into the exam, I do recommend that you read the AWS DevOps Blog, because they have so many good articles that show you the kind of integration and automation you can build on AWS as a DevOps engineer. This February 11 article demonstrates how to validate AWS CodeCommit pull requests using CodeBuild and Lambda. It is quite a complicated thing to build, and we're not going to build it in less than five minutes, but what I want to show you is how it works. So let me zoom in here so we can have a look at this diagram. Here is CodeCommit, and in CodeCommit we're working on the master branch, but maybe we'll have a development branch where we do development, and when we're ready to merge our work into master, we're going to create or update a pull request. That pull request will emit a CloudWatch event, and we've seen how CloudWatch Events can react to new pull requests being created or existing pull requests being updated. What will CloudWatch Events do? Well, it will trigger a Lambda function, and that Lambda function will say, "Hey, we're starting a build," so we'll have a little comment in the pull request saying a build has started, posted automatically by the Lambda function. CloudWatch Events will also trigger a build, and we've seen in the past lecture how CloudWatch Events is able to have CodeBuild as a target. So the build starts from this event, and it will be a success or a failure; in CodeBuild, every build ends up being a success or a failure. And when this happens, whether we get a build success or failure, it will emit another CloudWatch event, and CloudWatch Events will invoke another Lambda function, and this Lambda function will post another comment on the created or updated pull request saying, "Hey, what is the build outcome? Did it pass or did it fail?"
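To make the trigger in that diagram concrete, a sketch of the CloudWatch Events rule that reacts to pull request activity could look like the following; ValidationProject, CommentLambda and InvokeCodeBuildRole are placeholders for resources the blog post builds out in full:

```yaml
Resources:
  PullRequestRule:
    Type: AWS::Events::Rule
    Properties:
      Description: React to pull requests being created or updated in CodeCommit
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - CodeCommit Pull Request State Change
        detail:
          event:
            - pullRequestCreated
            - pullRequestSourceBranchUpdated
      Targets:
        - Id: start-validation-build
          Arn: !GetAtt ValidationProject.Arn       # CodeBuild project that tests the source branch
          RoleArn: !GetAtt InvokeCodeBuildRole.Arn
        - Id: comment-on-pull-request
          Arn: !GetAtt CommentLambda.Arn           # Lambda that posts the "build started" comment
```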
And so you get this kind of automation where, every time someone opens a new branch and a new pull request, it activates this entire pipeline, tests the code, builds it, and then finally uses Lambda to report what the outcome is. And that's really helpful. It's not something we're going to build right now, but you can see the power of using CloudWatch Events and Lambda functions to integrate all these services together and start building our DevOps automations. If you go through the entire article, which I think is quite interesting, at the end you will see something like this happen: this is what a pull request looks like. We have "my awesome change", and we get a comment on the change that says, "Hey, the build has started," posted by the pull request Lambda function. Then, depending on whether the build succeeded or failed, it displays a small badge indicating whether it failed or passed, and there's a little link to the logs that we can get directly from CloudWatch. So here, no hands-on, just architecture and DevOps. But remember this: we can build this kind of automation using CodeCommit, CloudWatch Events, Lambda, and so on. All right, I hope you liked this lecture, and I will see you in the next lecture.

Pay a fraction of the cost to study with the Exam-Labs AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01) certification video training course. Passing the certification exams has never been easier. With the complete self-paced exam prep solution, including the AWS DevOps Engineer Professional: AWS DevOps Engineer - Professional (DOP-C01) certification video training course, practice test questions and answers, exam practice test questions, and study guide, you have nothing to worry about for your next certification exam.
