Amazon AWS Solutions Architect Associate SAA-C03 Topic: AWS CLI, SDK, IAM Roles & Policies
December 14, 2022

1. AWS CLI Setup on Windows

Okay, so we are going to install the AWS command line interface on Windows. For this, we search for "AWS CLI install Windows" on Google, and this will give us the latest link; we want to install the AWS CLI version 2 on Windows, since this is the most up to date. Compared to version 1, the service APIs are exactly the same; version 2 just has improved performance and capabilities, plus an improved installer. So I'm going to scroll down here, and to install on Windows, we can simply use the MSI installer. I just clicked on this link to download the MSI installer, and then I'm going to run it. It should be very, very simple.

Now the installer is starting. I click on Next, accept the terms of the license, click on Next, then Install, and wait for the installer to be done. Yes, I want to allow whatever this is doing. Okay, the installer is now complete. I click on Finish, and now I can go ahead and open a command line, so I'll run a command prompt on Windows. To be sure that it's fully installed, I just type "aws --version" and press Enter. If you get a result similar to this, an aws-cli version that begins with a two, followed by the Python version and Windows, it means that your AWS CLI is properly installed on Windows, and you're good to go. Finally, it's just important to note that if you want to upgrade your AWS CLI, you just need to redownload that MSI installer and rerun the install, and it will be automatically upgraded. As soon as you have this output, you're good to go and can follow along in the next lecture. So, until the next lecture.
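For reference, the check looks roughly like this from a Windows command prompt (the exact version numbers will differ on your machine):

```
C:\> aws --version
aws-cli/2.9.0 Python/3.9.11 Windows/10 exe/AMD64 prompt/off
```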

2. AWS CLI Setup on Mac OS X

So let's get the AWS CLI installed on Mac. For this, we're just going to go on Google and make sure to choose the link for installing the AWS CLI version 2 on macOS, and then we're just going to follow the process. So we'll scroll down and see what they say, and here is how to install it: we can just download a pkg file, and it'll be a graphical installer. So you download the pkg file, then you click on "Continue," "Continue," and "I agree." Then you say okay, install for all users on this computer, click on Continue, and then click on Install.

And this goes ahead and installs the CLI on Mac. So we wait for everything to be done; the files are being written. Okay, the installation is now successful, and we'll move the installer to the trash. To put this to the test, launch a terminal on your Mac: you just go ahead and search for, for example, "Terminal," and this will bring up a terminal app. Mine is called iTerm, which is also a free terminal you can use on Mac. Then you just type "aws --version," and if everything goes well, it should return the version of the AWS executable; mine returns 2.0.10. That means that everything has been installed correctly. So that's it for this lecture. Please refer to this guide if you encounter any problems; it will have the answer for you. And that's it for me. I will see you in the next lecture.

3. AWS CLI Setup on Linux

Okay, so let's proceed with installing the AWS CLI on Linux. For this, I just Googled it, and I chose the link to install the AWS CLI version 2 on Linux, because this is the latest one. I'm going to scroll down, and we only need to run three commands to install the CLI. The first one downloads a zip file, so copy this, go into a terminal, and paste it. And here we go: this has been pasted, and the installer is downloading. The next thing to do is unzip it, so I copy this command and paste it, and this will unzip my installer. Great. And the last thing is to run the installer as root, so I'll do sudo and then install. This prompts me for my password, which I enter, and then the installation proceeds. Now it says you can run /usr/local/bin/aws --version, or simply aws --version if /usr/local/bin is in your path. And there we go, the AWS CLI has been installed. As you can see, it says aws-cli/2, and then you'll get a different version based on when you do this, followed by the Python and Linux versions. So you're good to go. When this works, we can run any command on the AWS CLI, and you can go ahead with the rest of the lectures. If you get any issues, please read this page, and it will tell you what's going on. And that's it. I will see you in the next lecture.
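At the time of writing, the three commands from the documentation look like this (the download URL may change over time, so check the current docs):

```
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version   # should print something like aws-cli/2.x.x Python/3.x.x Linux/...
```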

4. AWS CLI Configuration

In this lecture, we are going to configure the AWS CLI, and I want you to do it properly. To give you an idea of how it works: your computer will access the AWS network and your account using the CLI, the command line interface, over the internet. And when you connect your computer through the CLI to your AWS account, it will check for credentials and permissions. So we'll learn how to set up these credentials and how to protect them. And I want to say right now: do not share your AWS access key and secret access key with anyone. They're just like your username and your password, and you should not share them or show them to anyone. In this lecture, I'm going to show you mine, but right after, I'm going to invalidate them. Okay? So let's get started. Now that we have set up our AWS CLI, we need to configure it, and to do this, we need to get access credentials from the IAM service.

So let's navigate to the IAM service, click on Users, find your user, and then click on Security Credentials. This is where you can get your security credentials; the access keys are what we will input into our aws configure command. So the first thing to do is create an access key, and it says Success: this is the only time that the secret access key can be viewed or downloaded, and you cannot recover it later. So the idea is that you should keep these keys somewhere safe; you can click on "Download .csv file" and so on. These keys are super secret. Right now I'm showing you mine, but I will make them inactive right after this lecture; otherwise, you could use my account. So what I have to do, and don't do this with colleagues, okay, is get the access key ID and the secret access key. So let's go ahead and configure our AWS CLI. I'll keep this open just for a little bit. We go to the CLI and type "aws configure," and this prompts us for the access key ID. Fairly easy.

I can take it from here and paste it, then press Enter. I also need the secret access key, which I can copy from here and paste as well. Then you get to the default region name, and this region name is wherever your AWS account operates. For me, I'm operating in eu-west-3, but based on where you are in the world and what you've been learning so far, you can choose whatever region you want. Finally, the default output format is None, and we can leave it as the default.

Press Enter, and here we go. If you type aws configure again, you'll see that you already have an access key ID, a secret access key, a region name, and an output format, which you can change if you want. When you run the aws configure command, it creates some files in the .aws folder in your home directory on Linux and Mac; on Windows the folder is somewhere else, and you can check the aws configure documentation for Windows to see where it is.

So it creates two files, one called "config" and one called "credentials." If we look at the config file, it says that by default the region is eu-west-3, and that means that my AWS CLI calls will, by default, go to this region.

Now, if I cat the other file, if I want to print what the credentials file contains, we can see that by default it will use this AWS access key ID and this AWS secret access key. So these files are not to be shared with anyone, of course. And that's it; this is how you configure your CLI. Just so you know, I must now invalidate my credentials. So what I'm going to do is basically delete everything. I'll delete this one. By the way, if you only want to make a key inactive for a short period of time, you can do so, but I'll go and delete these two. So, basically, you cannot access my AWS account. And what I will do is create a new access key and use it to configure my AWS CLI. So, once you've completed that, we're all set to begin practicing with the CLI. See you in the next lecture.
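To recap, a configuration session and the two files it writes look roughly like this (the key values below are placeholders, not real credentials):

```
$ aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEYID
AWS Secret Access Key [None]: exampleSecretAccessKeyPlaceholder
Default region name [None]: eu-west-3
Default output format [None]:

$ cat ~/.aws/config
[default]
region = eu-west-3

$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKeyPlaceholder
```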

5. AWS CLI on EC2

So now we have set up the AWS CLI on our local computer, and as we've seen, we've been using our credentials. Now a very common use case is to also run the AWS CLI on EC2.

Now, there's a wrong way to do it and a right way to do it, and I don't want you to even try the wrong way. You could just run the aws configure command on EC2 the same way you did on your computer, and it would work, but you would be putting your personal credentials on the instance, and that is super insecure. So here's me putting on my security hat again. I don't want to act like your dad on this, but I kind of have to say it: never, ever put your personal credentials on an EC2 machine. It's really, really bad. Why? Because your personal credentials, as the name indicates, are personal, and they belong only on your personal computer. So you never put anything personal on an EC2 instance.

If your EC2 machine is compromised, your personal account is also compromised. And if your EC2 instance is shared between many people and other people use your credentials, then they may perform risky actions in your stead and really impersonate you. And who will be held accountable if they do something truly bad? You will. Okay? So for EC2, you never put your credentials there; don't even think about it. There is a better way, and now it's time to introduce it: IAM roles. So we've seen IAM before, right? But now we are going to concretely play with it. There are IAM users, IAM groups, and IAM roles, and IAM roles are meant to be linked to applications or EC2 instances. An IAM role comes with a policy, and under that policy, we can define what an EC2 instance should be able to do; it has no rights by default. So now the diagram is as follows: within the AWS network, we will have EC2 instances, and they will want to communicate with the AWS account using the CLI.

However, the EC2 instance will now have an IAM role attached to it. There is going to be some magic happening; you don't even have to worry about it. But then your AWS account will check the credentials and permissions of that role, which makes everything more secure: you never put your credentials on the EC2 instance, because it has its own role. Okay? These instance profiles and roles are so simple to use that instances will use them automatically without any additional configuration. This is the best practice on AWS, and you should do it all the time. The exam is very clear about it: any time there is an instance that needs to perform something, never, ever think about putting your credentials on it; always use an IAM role. That should be automatic in your head. So let's go ahead and practice this.

OK, so I am in my console, and the first thing I want to do is go to EC2. I'll also open a new tab just to keep things in sync, and in this tab I'll open IAM. Okay, so the first thing to do is go to our running instances, and as you can see, there's no running instance because I'm in the wrong region. So let's make sure I go back to EU (Paris), and here are my running instances. Okay, one of them is running, so what I have to do is SSH into it. I'll go and find the public IP, and I will SSH into it; for this, I run the SSH command and specify the right IP. I'm in my EC2 instance now. Now if I type aws, as you can see, because we have used Amazon Linux 2, it's actually nice: we don't need to install the AWS CLI on it, it is already installed. If I do aws --version, we get information about the version of the CLI.

As you can see, it's a little bit older than what we've installed on our computers, but it will still work just fine. So the command is working. Now, we could do aws configure, but as I said, it is really bad to put anything into the AWS access key ID, so I'll just press Enter because we don't want to put anything; same for the secret access key, and I press Enter again. Now, for the default region name, I can definitely say eu-west-3, because this is where my instance is running, and then I'll press Enter for the default output format. So as you noticed, I have not put my access key ID or my secret access key there.
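That session looks roughly like this; the two key prompts are deliberately left blank:

```
[ec2-user@ip-172-31-x-x ~]$ aws configure
AWS Access Key ID [None]:              <- press Enter, leave blank
AWS Secret Access Key [None]:          <- press Enter, leave blank
Default region name [None]: eu-west-3
Default output format [None]:
```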

Okay, that's it for the security bits. So now, what if we run aws s3 ls and press Enter? We get an error saying it is unable to locate credentials, and it sort of misleads you into thinking you should configure credentials by running aws configure. As we said, aws configure would prompt us to enter our own personal credentials, and that's bad. There is a better way: instance roles. So, when we look at this instance, we can see that it has no IAM role attached to it. We can attach one, and we will do so right now. So in IAM, I will go to Roles and create a role. For this, I need to select the type of trusted entity I want to attach to this role: an AWS service.

As you can see, you can attach roles to a lot of different AWS services. Basically, roles in AWS are used so that any service can have its own set of permissions. For us, the most popular are going to be EC2 and Lambda, but for now we're dealing with EC2. So click on EC2, click on Next, and then we can look for policies. These are managed policies we can attach, and let's just filter for S3. We'll pick AmazonS3ReadOnlyAccess, so I'll go ahead and attach this read-only access. Then I will click on Next, on the blue button right here, and set the role name. I'll call it MyFirstEC2Role. The role description can be anything you want; I'll just say it allows EC2 to make calls to Amazon S3, and these are read-only calls, because we have attached read-only access. That sounds right. I click on Next, and now my first EC2 role has been created.
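If you prefer the CLI, a sketch of the equivalent of this console flow would be the following (the role name and file name are just examples; note that the console also creates an instance profile of the same name behind the scenes, which via the CLI you would create separately with aws iam create-instance-profile and aws iam add-role-to-instance-profile):

```
# trust.json lets the EC2 service assume the role:
# {
#   "Version": "2012-10-17",
#   "Statement": [{
#     "Effect": "Allow",
#     "Principal": { "Service": "ec2.amazonaws.com" },
#     "Action": "sts:AssumeRole"
#   }]
# }
aws iam create-role --role-name MyFirstEC2Role \
    --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name MyFirstEC2Role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```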

Back in the role list, it is right here in the last line. So let's go back to the EC2 Management Console. Under instance settings, you can attach or replace an IAM role by right-clicking the instance. So we'll attach MyFirstEC2Role, which is suggested right here; we could also have created an IAM role straight from this console. When I click Apply, it says that the IAM role operation was successful. Click on Close, and now, if we scroll down again, we can see that there's an IAM role attached to our instance: it is my first EC2 role. Return to our terminal and type aws s3 ls, and as you can see now, the command has succeeded.

The bucket of Stefan and the other bucket are listed. We can also go one level deeper and make sure that we can list the files within my bucket, and it seems to work: I can see my beach.jpg, my coffee.jpg, and my index.html file. So all these things work. But what if we try to do something a little bit trickier, such as creating a bucket? So we do aws s3 mb and then just some random bucket name. Press Enter, and now we get a make_bucket failed. It says an error occurred: Access Denied when calling the CreateBucket operation. That means our EC2 role does not have sufficient permissions to perform a make bucket operation.
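From the instance, that sequence looks something like this (bucket names are examples):

```
[ec2-user@ip-172-31-x-x ~]$ aws s3 ls
2022-12-01 10:00:00 bucket-of-stefan
[ec2-user@ip-172-31-x-x ~]$ aws s3 ls s3://bucket-of-stefan
2022-12-01 10:05:00      12345 beach.jpg
2022-12-01 10:05:00      23456 coffee.jpg
2022-12-01 10:05:00       1234 index.html
[ec2-user@ip-172-31-x-x ~]$ aws s3 mb s3://some-random-bucket-name
make_bucket failed: s3://some-random-bucket-name An error occurred (AccessDenied)
when calling the CreateBucket operation: Access Denied
```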

And that makes sense, right? Because if you click on the role, the only policy I have attached to it is AmazonS3ReadOnlyAccess. So the idea is to give our EC2 instance exactly the permissions it needs to do its job, and nothing more. EC2 does not need administrator rights; it is typically an application that interacts with a few AWS components, so we could tailor a policy specifically for that situation. Obviously we have to see how we can edit these policies, and we'll have a lab on this in the next lecture, but for now, I can just attach a new policy. And if we look at the AmazonS3FullAccess policy, which is quite a strong policy, I will apply it. Now, our EC2 instance has both read-only access and full access.

So now, if we run this make bucket command again, as we can see, access is still denied. This is another very interesting thing about IAM: when you apply a policy, it can take a little bit of time to become effective, so it's not immediate; things have to be replicated across the global infrastructure of AWS. So that's expected; just keep trying to run the command. And as you can see, I just ran it again, and now the make bucket operation has succeeded. Now let's do aws s3 rb to remove that bucket, and this works as well. Okay, everything worked. So this is it, just to get you started: a primer on how to create IAM roles. An IAM role can be attached to as many EC2 instances as you want, but an EC2 instance can only have one IAM role at a time. And IAM roles are used to give permissions to EC2 instances so they can perform API calls on your behalf. So, as you can see, my EC2 instance could do a lot of things, but I never used my personal credentials on it, and that's much better. I hope that was helpful. In the next lecture, we'll do a much deeper dive into IAM policies.

6. AWS CLI Practice with S3

So now that we've set up the AWS CLI correctly, it's time for us to practice and see how it works. For this, we'll use the S3 CLI as an example. So I'll type "aws s3 cli" into Google, and it'll give me the reference for how to use the CLI. There are a bunch of commands such as cp, mv, rm, mb, rb, and ls; there are a lot of them, actually.

Okay, so let's go ahead and practice. You can find the available commands right here and click on one; let's click on ls, for example. ls lists S3 objects and common prefixes under a prefix, or all your S3 buckets. We can always go to the examples and see how it works. The first command they suggest is aws s3 ls, and this lists all the buckets in your AWS account. As you can see, I have two buckets. Now, we can list the content of one bucket in particular, so we do aws s3 ls and then give the S3 URI of Stefan's bucket. Press Enter, and as you can see, within my bucket I have all these files: beach.jpg, coffee.jpg, and index.html.

So it appears that we can do a lot in S3 using this command. Let's also try to download a file. We go back, and we'll look at cp. Copy allows you to copy a local file or an S3 object to another location, either locally or in S3; so, from your computer to S3, or from S3 to your computer. Let's take a look at how the examples work. Obviously, as you can see, there are always a lot of options available. We can also do aws s3 cp help; in general, for the help command you don't use minus minus, you simply put help after the command. This gives you a bunch of documentation that you can read directly from your command line, and it is identical to what's found on the website. When you're ready to quit, just type q.

Okay, so now let's use the aws s3 cp command. We'll copy a file from S3, from Stefan's bucket: the coffee.jpg file that I really liked, and we'll copy it locally. So we'll say coffee.jpg as the destination and hit Enter, and as you can see, my file has been downloaded. If I do ls, I can see that I have my coffee.jpg file right here. So, you know, there are a lot of AWS commands you can try, especially for S3. Another one that's pretty common is mb, for make bucket; with it you can create a bucket on the fly. Let's just do an example: I'll type some random name right here so I know no one has used it before, and the bucket creation was successful. So, if I do aws s3 ls, we should now see three buckets. And conversely, if I need to remove that bucket, there is the rb command, and rb deletes an empty S3 bucket. So, just as an example, I'll do aws s3 rb and the URI of my S3 bucket, which I'll just copy here for simplicity. Press Enter, and now my bucket is gone.
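Putting the session together, the commands were roughly these (bucket and file names are examples):

```
aws s3 ls                                               # list all my buckets
aws s3 ls s3://bucket-of-stefan                         # list objects in one bucket
aws s3 cp s3://bucket-of-stefan/coffee.jpg coffee.jpg   # download a file
aws s3 mb s3://some-random-bucket-name                  # make a bucket
aws s3 rb s3://some-random-bucket-name                  # remove an (empty) bucket
```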

If I do aws s3 ls again, we can see that my bucket is gone. So there are numerous commands available in the AWS CLI; it's not just S3, all of the AWS services are available to you. As you can see, all the available services are right here; I mean, there are a lot of them: EC2, Auto Scaling, and so on. Basically everything in AWS can be controlled by the CLI, but S3 is a very popular one. In the next lecture, we'll also explore some other CLI commands. But that's it, just to get a little practice with the CLI. Obviously, if the CLI wasn't configured correctly, you would get some errors; but overall, for me, it seems like it's been configured correctly, and I don't get any errors. I hope you get the same. That's just a taste of what the CLI has to offer. I will see you in the next lecture.

7. IAM Roles and Policies Hands On

So let's do a deep dive into IAM roles and policies. In IAM, I'm looking at my first EC2 role, and it has two policies attached to it, with different components. As you can see, there are attached policies that are managed, or you can create your own. So, if we go to the Policies tab, we can see that AWS manages all of these policies.

That means they will get updated over time by AWS, but you can also create your own policy, and when you create your own policy, you can choose service actions, resources, and request conditions. You can also import policies, manage them, and so on. So it is very much possible for your organization to create its own set of policies for its infrastructure. Now, there is another thing you can do: when you go to your EC2 role, you can add an inline policy. Inline policies are policies that are added inline, on top of whatever you've already attached, and it turns out that these policies cannot be added to other roles. OK, so an inline policy is just for that one role.

Overall, I don't really recommend using inline policies; it is always preferable to use managed policies on a global scale to gain a better management perspective. So, how do we create and analyze these policies? Let's take a look at AmazonS3ReadOnlyAccess. It gives you a policy summary, which is a nice little table, or you can view the JSON document. If you look at the policy summary, it says that on S3, you get full read and limited list. Okay, let's have a look at the JSON: it says we're allowed to perform the actions "s3:Get*" and "s3:List*" on the resource star, and that means that on any Amazon S3 resource, you are able to perform API calls whose names start with Get or List.
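For reference, the JSON of that managed policy looks essentially like this (newer revisions of the policy may include a few additional actions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": "*"
    }
  ]
}
```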

So how does that work? Well, let's Google the S3 ListBucket API, and now we have the ListBucket API right here. If we look at it, the name of the API is ListBucket, and this is why we have List star: there are a variety of list operations available on S3, and don't forget that we can also do GetObject, GetBucket, and so on. If we look at the AmazonS3FullAccess document instead, we can see that the JSON document this time allows "s3:*", which means that any API call on S3 is permitted, on whatever resource you want. So the idea is that you are able to specify exactly what you want through this JSON document.

But how do we know exactly how to write these JSON documents? Let's try it and make our own policy; I'll create one and say that it's just a practice policy. So we need to choose a service, and we get a Visual Editor or a JSON editor. The JSON editor is for when you want to type it all out or copy and paste something from the web, and the Visual Editor is where you just click and choose what you want. So let's choose a service; for example, we'll choose Amazon S3 because we've been working with it. And then here, it gives me all the actions that exist on S3. I can say all S3 actions, s3:*, which we've seen, or we can choose from "List," "Read," "Write," or "Permissions management." And if we drill down into a section, we can see all the different operations that can be allowed within it.

So perhaps we only want to allow GetObject, so I just tick this one; if I clicked on Read instead, it would select all 31 possible read API calls, and the same goes for List, and so on. So let's just allow GetObject for this. That sounds right. Okay, now we can choose resources. When we scroll down and click on Resources, it asks: what are you allowed to read? You're allowed to read either a specific bucket or all resources; "All resources" is the star we've seen previously. But we can also be specific and add an ARN. To add an ARN, we can look at the ARN we have for our bucket and say: okay, the bucket name is the bucket of Stefan. We could also tick "Any," which would change the bucket name to a star, but we'll leave it as Stefan's bucket. For the object name, you can click here, select "Any," and that just adds a slash star.

Right here, we click on Add. So I've basically created a policy just for this. Then you can specify conditions; they're optional, and they allow you to drill down further into the policy. For now, we won't need them. We could add additional permissions if we wanted, but we don't need them either, so we'll remove that block and review the policy. Now I can see that I grant read access to this resource under these conditions, and I'll simply name it something like MyTestCustomPolicy. We click on Create Policy, and now we can click on the policy itself and even look at the JSON. The JSON was generated by the Visual Editor, and we can see that we allow s3:GetObject on this resource. So this Visual Editor is actually quite nice. Another tool you should be aware of is the AWS Policy Generator; just Google it, and you'll get the first link. It is basically the same kind of thing, where you choose a policy type: we'd choose an IAM policy, say we want to allow actions on the S3 service, pick the GetObject action, and enter the ARN we had. To keep it simple, we click on "Add Statement" and then "Generate Policy," and it gives us the JSON we need.

So these two tools are very similar; they were both created by AWS. I think Amazon wants you to use the Visual Editor in the IAM console now, but just so you know, this policy generator tool also exists. So from these, you're able to create your policies. The benefit of creating policies in IAM is that you can see who is using a policy as well as its versions: you can add versions of a policy, so you can always roll back to a previous version if the new one grants too many permissions, and you can track all the versions you've ever created. Coming back to our roles, we can look at our first role, so I'll just close this and click on it, and we can attach our policy. The one we just created, MyTestCustomPolicy, has the type "Customer managed," and we can attach it. So here we go: now our EC2 role has these three policies, and because I manage the custom one, I'm able to make it more specific to my EC2 instance, which is probably better security. So that's it for creating policies; in the next lecture, I'll show you how to test them.
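For reference, the custom policy built above comes out looking roughly like this (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-of-stefan/*"
    }
  ]
}
```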

8. AWS Policy Simulator

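So, how do we test the policies we just created before relying on them? There are two main options. The IAM Policy Simulator (policysim.aws.amazon.com) lets you select an IAM user or role, pick a service and a set of actions, and run a simulation that reports whether each action would be allowed or denied by the attached policies. The same check can also be scripted; here is a minimal sketch with the CLI, where the account ID, role name, and AMI ID are placeholders:

```
# Ask IAM whether the role may call these two S3 actions
aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:role/MyFirstEC2Role \
    --action-names s3:GetObject s3:CreateBucket

# Many commands also support --dry-run, which checks your permissions
# without actually performing the action
aws ec2 run-instances --dry-run --image-id ami-12345678 --instance-type t2.micro
```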

9. AWS EC2 Instance Metadata

So one more concept is called the EC2 instance metadata. It's a very powerful one, and I think it's one of the least known features among developers, so it's really good; when you discover it, you're like, "Wow, this is kind of awesome." So let's go through it. It essentially allows your EC2 instances to learn about themselves without the need for an IAM role. And that kind of makes sense, right? Your EC2 instances should be able to tell you who they are. And the URL is something you should remember: http://169.254.169.254/latest/meta-data.

And this is very, very important: the IP address 169.254.169.254 is an AWS internal IP address. It will not work from your computer; it will only work from EC2 instances. Using it, you can retrieve the IAM role name from the metadata, but you cannot retrieve the IAM policy; the only way to test a policy is to use the policy simulator or the dry-run options. So we cannot retrieve the content of the IAM policy using this URL. Just to remember: the metadata is info about the EC2 instance, which we'll see in a second, whereas the user data is the script used to bootstrap an EC2 instance at launch. Okay? They are very different concepts, and we'll be able to access both. So let's practice and see what we can do with this EC2 instance metadata. Here I am in my EC2 instance, and the first thing I want to do is curl; curl queries a URL, and I will do a curl on 169.254.169.254. What we get out of it is a bunch of dates and numbers: these are the versions of the metadata API that you can use. As I said, for now we really don't care about the API version, and we'll just use /latest. So let's go right here with /latest, and always make sure to add the trailing slash. We get two different entries: dynamic and meta-data. And actually, right here as well, there is a third one you probably don't see.

That's the third one, sorry: it is user-data. So, as you can see, from this URL you're able to retrieve both the metadata and the user data. We're not interested in the user data right now; we're interested in the metadata. So let's go ahead and add /meta-data, and never forget the slash at the very end. From this, we get a bunch of different options: we see ami-id, ami-launch-index, hostname, iam, and so on. Any time an entry ends with a slash, that means there is more underneath it; when it doesn't end with a slash, that means it's a value. So if we look, for example, at the instance ID, we'll curl instance-id, and what we get out of it is my instance ID. Pretty awesome, right? We can do the same with local-ipv4, and we get the local IPv4 address of our EC2 instance. What you should notice here is that we haven't been authorized through an IAM role to get this information; it comes for free. Any EC2 instance, even without an IAM role, can request all of this information, and it is well worth learning how to navigate through it; it's quite helpful when you do automation.
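A few example queries from inside the instance (outputs are abbreviated; the exact values are specific to your instance):

```
curl http://169.254.169.254/latest/                        # dynamic, meta-data, user-data
curl http://169.254.169.254/latest/meta-data/              # ami-id, hostname, iam/, ...
curl http://169.254.169.254/latest/meta-data/instance-id   # e.g. i-0123456789abcdef0
curl http://169.254.169.254/latest/meta-data/local-ipv4    # e.g. 172.31.10.25
```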

So, for example, if I curl hostname, I get the hostname. And under iam, as we'll see in a moment, we have more values to examine, such as info, and there is one called security-credentials. So I'll just give you some insider knowledge about how things work: basically, when you attach a role to an EC2 instance and you query security-credentials, you're going to get the role name, which is right here, my first EC2 role.

Under my first EC2 role, we obtain an access key ID, a secret access key, and a token. So, when you attach an IAM role to an EC2 instance, the instance performs API calls by querying this entire URL right here, from which it obtains an access key ID, a secret access key, and a token. And it turns out that these are short-lived credentials: you can see there's an expiration date here, usually something like one hour. So the idea is that your EC2 instance receives temporary credentials from the IAM role to which it is attached. This is basically how IAM roles work on EC2 instances. I know not many people tell you about this, but I just wanted to pique your curiosity and show you the complete URL. Again, what you should remember is that using this metadata URL, 169.254.169.254, you can get a lot of information about your instance. So I hope that was helpful, and I will see you in the next lecture.
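For the curious, the query and its (redacted) response look roughly like this; note that newer instances may enforce IMDSv2, which requires fetching a session token first:

```
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# -> MyFirstEC2Role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/MyFirstEC2Role
# {
#   "Code" : "Success",
#   "Type" : "AWS-HMAC",
#   "AccessKeyId" : "ASIA...",
#   "SecretAccessKey" : "...",
#   "Token" : "...",
#   "Expiration" : "2022-12-14T12:00:00Z"
# }
```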

10. AWS SDK Overview

Finally, let's go right into the AWS SDK. SDK stands for Software Development Kit, and the idea is that if you want to perform actions on AWS not from the CLI but from your code, in whatever language you use, you can use the AWS SDK. There are official SDKs and unofficial SDKs; the official SDKs cover Java, .NET, Node.js, PHP, Python, Go, Ruby, C++, and more. By the way, if you ever hear someone say "Boto3" or "botocore," those are the names of the Python SDK. So I'm pretty sure that as a developer, your language is in there, and if it's not, I'm pretty sure that if you Google it, you'll find an AWS SDK for your language. The SDK is really useful when we start coding against AWS services such as DynamoDB. In fact, the AWS CLI itself uses the Python SDK, Boto3, and you may have noticed this.

So the AWS CLI is a wrapper around the Python SDK, and the exam expects you to know when you should use an SDK, but it doesn't really expect you to code with the SDK. There are a few concepts we'll visit in this class, and we'll practice them mostly when we get to Lambda functions. So overall, don't worry: everything you should know for the exam will be taught in time; right now, this is just an introduction to the SDK. Something good to know: if you don't specify or configure a default region, then the API calls will default to the us-east-1 region when you use the SDK. Now, for credentials, because security is always such a big part of the SDK and the CLI, it's recommended to use what's called the default credential provider chain. Behind that very complicated name, this basically means the SDK works seamlessly with the credentials you set up with aws configure: remember, that saved a credentials file on your computer, and if you use the SDK, it will automatically look for that file and use the credentials from there.

If you use an EC2 machine with IAM roles, you can use the instance profile credentials, and the SDK will look for these credentials automatically. Finally, in some places you can use environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. This is a little less recommended but still useful; these environment variables, which I don't really recommend, still work with the SDK. Overall, here is my number one recommendation; it is obvious, but I'm pretty sure you'll get asked about it in the exam: never store your AWS credentials in your code. Okay? Credentials should never be stored in your code; instead, your code should be abstracted from them and rely on the credentials configured on your machine, or on the IAM role of your instance. This is a best practice: if you work from within AWS services, always use IAM roles. I'm pretty big on security, so this is why I always have bold, upper-case stuff when it comes to security, because that's really important, and I don't want you to have problems later down the road when you use AWS.
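As a minimal illustration of the default credential provider chain, here is a Boto3 sketch; note there are no credentials anywhere in the code, since they are resolved from ~/.aws/credentials, environment variables, or the instance role (the region is just an example):

```python
import boto3

# No keys in the code: boto3 resolves credentials through the
# default credential provider chain automatically.
s3 = boto3.client("s3", region_name="eu-west-3")

# List all buckets in the account, like "aws s3 ls"
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```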

Finally, when you use the SDK, there is something called exponential backoff. When an API call fails because it has been called too many times across your applications, you go into a strategy called exponential backoff, and that's only for rate-limited APIs. The SDK usually implements a retry mechanism with exponential backoff for you, but I want to show you what it looks like so you are aware of it. Basically, after your first API call fails, you wait maybe ten milliseconds before retrying. If the second API call fails, you wait 20 milliseconds, and the third retry, as you can see by the doubled length of the arrow, waits 40 milliseconds. If the next API call still fails, it waits double that time again, and so on. So exponential backoff means that if your API calls keep failing, you wait twice as long as before each new attempt, and that ensures that you don't overload the API by retrying every millisecond. Okay, so exponential backoff is included in most SDKs, and it's just something you need to be aware of. So that's it for the AWS SDK. I hope that was helpful, and I will see you in the next lecture.
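To make the doubling concrete, here is a minimal sketch of a retry loop with exponential backoff; this is an illustration, not the SDK's actual retry code (real SDKs also add random jitter and only retry throttling or server errors):

```python
import time

def call_with_backoff(api_call, max_retries=5, base_delay=0.01):
    """Retry api_call, doubling the wait after each failure."""
    for attempt in range(max_retries):
        try:
            return api_call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))  # 10 ms, 20 ms, 40 ms, ...
```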
