7. [SAA/DVA] S3 Security & Bucket Policies
Okay, so now let's talk about Amazon S3 security. It's fairly complex, but first you have user-based security. Our IAM users have IAM policies, and these authorize which API calls should be allowed. If a user is authorized through an IAM policy to access our Amazon S3 bucket, then it will be able to do so. Then we have resource-based security. The main one is the S3 bucket policy. These are bucket-wide rules that we can set in the S3 console, and they state what principals can and cannot do on our S3 bucket. This is also what gives us cross-account access to our S3 buckets. We'll go over S3 bucket policies in great detail. Then we have object ACLs, which are finer grained, where we set the access rules at the object level. Finally, there are bucket ACLs, which are even less common, and these two don't really come up at the exam. Okay, note an important principle: an IAM principal (a user or a role) can access an S3 object if the IAM permissions allow it (meaning there is a policy attached to that principal that allows access to your S3 bucket), OR if the resource policy (usually your S3 bucket policy) allows it, AND there is no explicit deny. So if your user is permitted through IAM to access your S3 bucket, but your bucket policy explicitly denies them, then they will not be able to access it. Okay?
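That evaluation rule can be sketched in a few lines of Python. This is a deliberately simplified model of S3 authorization (real IAM evaluation handles many more policy types), just to illustrate that an explicit deny always wins, and otherwise an allow from either the IAM policy or the bucket policy is enough:

```python
def can_access(iam_allows: bool, bucket_policy_allows: bool,
               explicit_deny: bool) -> bool:
    """Simplified S3 authorization model: an explicit DENY always wins;
    otherwise access is granted if EITHER the identity-based (IAM) policy
    OR the resource-based (bucket) policy allows it."""
    if explicit_deny:
        return False
    return iam_allows or bucket_policy_allows

# IAM allows, but the bucket policy explicitly denies -> blocked
print(can_access(iam_allows=True, bucket_policy_allows=False, explicit_deny=True))   # False
# Only the bucket policy allows (e.g. cross-account access) -> granted
print(can_access(iam_allows=False, bucket_policy_allows=True, explicit_deny=False))  # True
```
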
Okay, so now, as a deep dive on S3 bucket policies: they're JSON-based policies, and here we have an example JSON bucket policy. This bucket policy allows public reading of our S3 bucket. As we can see, it says Effect: Allow, Principal: "*", so anyone can take the action s3:GetObject on the resource, which is the bucket ARN followed by /*, that is, any object in my S3 bucket. So this allows public access to our S3 bucket. These bucket policies can be applied to buckets and objects. The Action is a set of APIs to allow or deny, the Effect is Allow or Deny, and the Principal is the account or the user that this S3 bucket policy applies to. Some common use cases for S3 bucket policies are to grant public access to a bucket, force objects to be encrypted at upload time, or grant access to another account using cross-account S3 bucket policies.
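Here is that public-read policy reconstructed as a Python dict (the bucket name `examplebucket` is a placeholder), serialized to the JSON you would paste into the console:

```python
import json

# Bucket policy allowing anyone (Principal "*") to GetObject on every
# object in the bucket; "examplebucket" is a placeholder bucket name.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicRead",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
    }],
}

print(json.dumps(public_read_policy, indent=2))
```
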
So we'll take a close look at S3 bucket policies. Then there are the settings for blocking public access. We've seen these firsthand when we got started. This was a newer setting that was created to block objects from being public even if a policy or ACL would grant it. There are four different block public access settings: blocking public access granted through new access control lists, through any access control lists, through new public bucket or access point policies, and blocking public and cross-account access to buckets and objects through any public bucket or access point policy. You don't need to remember these four different settings, okay? It's just a summary here. What you need to remember going into the exam is that there is a way to block public access to your S3 bucket through these settings; the exam will not test you on each of them. Historically, these settings were created to prevent company data leaks, because there were a lot of leaked Amazon S3 buckets in the news, and AWS came up with this way of letting an administrator say, "Hey, none of my buckets are public," thanks to these settings. And that was very popular. So if you know that your buckets should never be made public, leave them on. There is also a way to set these at the account level, as we'll see in the hands-on. Other security features in S3 you should be aware of are on the networking side: if you have instances in your VPC without internet access, they can still access S3 privately through what's called a VPC endpoint. And for logging and audit:
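For reference, the four settings map to these flags in the S3 API's public access block configuration. This sketch just builds the configuration dict; with an SDK such as boto3 you would pass it to the `put_public_access_block` call, which is not shown here:

```python
# The four "Block Public Access" flags, as named in the S3 API.
# Setting all four to True blocks public access entirely.
public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject new public bucket policies
    "RestrictPublicBuckets": True,  # restrict public/cross-account access via policies
}

print(all(public_access_block.values()))  # True -> bucket is fully locked down
```
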
S3 access logs can be used, and they can be stored in another S3 bucket. API calls can also be logged in CloudTrail, which is a service for logging API calls in your account. For user security, you have MFA Delete. MFA is multi-factor authentication: if you want to delete a specific version of an object in your bucket, you can enable MFA Delete, and you will need to be authenticated with MFA to be able to delete that object version. Finally, there are pre-signed URLs, which we saw briefly when we opened a file: there was a very, very long URL that was signed with some AWS credentials, and it's valid only for a limited time. A use case, for example, is to let a user download a premium video from a service only if they are logged in and have purchased that video. So the idea here is that any time the exam mentions giving certain users access to certain files for a limited amount of time, think pre-signed URLs. In the next lecture, we'll do a hands-on session on S3 security to see all these various options. So I will see you in the next lecture.
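A pre-signed URL is just a normal URL plus an expiry timestamp and a signature computed with the signer's credentials; anyone holding the URL can use it until it expires. Here is a toy illustration of the idea using HMAC; this is NOT AWS's real Signature Version 4 algorithm (in practice you would call `generate_presigned_url` in an SDK such as boto3):

```python
import hashlib
import hmac
import time

SECRET = b"demo-secret-key"  # stands in for the signer's credentials

def presign(url, expires_in, now=None):
    """Append an expiry and an HMAC signature to a URL (toy SigV4 stand-in)."""
    expires = int((now if now is not None else time.time()) + expires_in)
    payload = f"{url}?Expires={expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}&Signature={sig}"

def verify(signed_url, now=None):
    """Recompute the signature and check the URL has not expired."""
    payload, _, sig = signed_url.rpartition("&Signature=")
    expires = int(payload.rpartition("Expires=")[2])
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    current = now if now is not None else time.time()
    return hmac.compare_digest(sig, expected) and current < expires

url = presign("https://example.com/premium/video.mp4", expires_in=3600, now=1000.0)
print(verify(url, now=2000.0))  # within the hour: True
print(verify(url, now=9999.0))  # past expiry: False
```
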
8. [SAA/DVA] S3 Bucket Policies Hands On
Okay, so let's have a play with our S3 bucket policies. Let's go into Permissions; the goal for us is to define a bucket policy, written in JSON, that will prevent uploading objects that are not encrypted. So let's edit this bucket policy. We have two links: an example policy that links to the AWS documentation, if you want to read about all the possibilities you have for creating a bucket policy, and, if you just want to follow along with me, the AWS Policy Generator for S3 bucket policies. Okay? So let's generate our bucket policy. First, we need to select the policy type, and the type we want is an S3 bucket policy. Please make sure to select "S3 Bucket Policy"; this is very important, otherwise you will not see the same options as me.
Okay, so we have an S3 bucket policy here, and now we want to add a statement. What we want to do is deny any object being uploaded into Amazon S3 that is not encrypted using, for example, the SSE-S3 scheme. So the Effect is going to be Deny, and the Principal is "*", so it applies to anyone. The action is an upload, and the API name for uploading a file to Amazon S3 is PutObject, so we select s3:PutObject here. Then we need to specify the ARN, which should be the bucket name plus the key name. So let's go into the S3 Management Console; here they provide us with the bucket ARN because they know we're going to use it. Let's paste it, and as I've pasted my bucket ARN, please make sure to add a slash and then a star at the end of the resource name.
Why? Well, the action we selected is called PutObject, and as you can see right here, PutObject applies to objects within buckets. So to specify that we want to apply this to objects, we need to specify the bucket name, then a slash and a star, where the star indicates any object within that bucket. So we're saying, "Okay, deny anyone uploading an object anywhere in my bucket," and we need to add a condition, otherwise we would not be able to do anything useful with this bucket. So we'll add a condition, and the condition operator is Null. The key it looks at is, let's have a look, s3:x-amz-server-side-encryption, and the value is true. This checks whether the encryption header is present when we send a file to Amazon S3. So let me explain what I did: we're saying if this header is null, that's the condition, then deny. And that makes sense, because if this header is null, we are sending the file without asking for any kind of encryption. So we'll add this condition, and this is our first statement. Let's click on "Add Statement" and add a second one. Again we'll deny from anywhere, and the action will be PutObject once more; let's find it quickly. The resource name has to be the bucket name plus "/*". And for the condition this time, we're going to use StringNotEquals. The key is the same as before, s3:x-amz-server-side-encryption, and the value is going to be AES256. So we're saying: if the file is uploaded with the header, but the header value is not equal to AES256, which represents the SSE-S3 type of encryption, then deny it. So we'll add the condition, add the statement, and here we go: we have generated our policy, which I can copy and paste into my S3 console, save changes, and we're good to go.
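Putting the two statements together, the generated policy looks roughly like this, reconstructed here as a Python dict (replace `examplebucket` with your own bucket name). The `Null` condition denies uploads that omit the encryption header entirely, and `StringNotEquals` denies uploads whose header is present but not `AES256`:

```python
import json

deny_unencrypted_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny PutObject when the encryption header is missing entirely.
            "Sid": "DenyMissingEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
        {
            # Deny PutObject when the header is present but not SSE-S3 (AES256).
            "Sid": "DenyWrongEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
        },
    ],
}

print(json.dumps(deny_unencrypted_policy, indent=2))
```
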
So here we have defined a bucket policy that denies any object upload that is not encrypted with SSE-S3. So let's have a look: let's upload an object and see if that works.
So we'll add a file, coffee.jpg, and as we can see, I don't specify any encryption setting in particular. Okay, so it's going to go with "none". Click on upload; it failed, and we can look at why. It failed because access was denied, and this is due to the bucket policy. This is obviously good, because it's what we expected. Now let's upload the same coffee.jpg file, but this time specify the encryption to be SSE-S3; by setting the right header, this should work. So let's upload it and see. Yes, this has succeeded. Finally, let's try to upload this file one last time, but specifying a KMS type of encryption. So let's override the encryption with SSE-KMS using the AWS managed key, and click on upload; this again fails because it doesn't respect the bucket policy. So the bucket policy is working just fine. And how did I figure this out? If you google something like "S3 bucket policy deny unencrypted uploads," you'll find the kind of blog posts and documentation that show you how to write this sort of bucket policy. So this is not something I invented; I use the documentation as a reference for my courses, but I wanted to show you how to generate this policy using the AWS Policy Generator.
Okay, hopefully this makes sense. Now there are other settings that we may want to look at for security, so let's go into Permissions. Here we can see that there is a "Block all public access" setting, and it is on by default, just to prevent any data leaks from AWS into the world; we'll see how to change this in a future lecture. We can also define these block public access settings at the account level: on the left, I can change my account settings for blocking public access, and I can block all public access to all my buckets if I want to by ticking this one. So this is one more level of security. Okay? And then finally, for my objects, if I look at coffee.jpg, there is something called an ACL, an access control list. I can scroll down to the access control list, and this is something I won't linger on, because we're not using it and the exam really doesn't touch it. But access control lists are a way for you to define read and write access at the object level. Anyway, I won't linger on it because this is not very important for the exam; just know that ACLs are another way to protect your objects in Amazon S3. So that's it for this lecture. I hope you liked it, and I will see you in the next lecture.
9. [SAA/DVA] S3 Websites
Okay, so now let's talk about S3 websites. S3 can host static websites and have them accessible on the world wide web. The website URLs will be very simple: it's an HTTP endpoint, and it can look like this or like this, depending on the region you're in. The idea is that it begins with the bucket name, then s3-website, then the AWS region, then amazonaws.com. And if you enable it as a website but don't set a bucket policy that allows public access to your bucket, you will get a 403 Forbidden error; we'll see how to fix this in this hands-on. So let's go ahead and enable our S3 bucket website. Okay? Let's make this bucket a static website. First, let's make sure to remove the bucket policy we had before; I'm just going to delete it so we can upload objects without any problems. Okay, great. So now let's go back to our objects. We want to upload an index.html file, and to find it, you go into the course downloads code, where you will find an index.html, which is a very simple HTML document titled "My First Webpage" that says Hello Coffee and Hello World, and it will load the coffee.jpg file.
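As a quick sketch of the two endpoint shapes (which one applies depends on the region; the bucket name and region below are placeholders):

```python
def website_endpoints(bucket, region):
    """Return the two S3 static-website endpoint formats AWS uses:
    some regions use a dot before the region, others use a dash."""
    dot_style = f"http://{bucket}.s3-website.{region}.amazonaws.com"
    dash_style = f"http://{bucket}.s3-website-{region}.amazonaws.com"
    return dot_style, dash_style

dot, dash = website_endpoints("my-bucket", "eu-west-1")
print(dot)   # http://my-bucket.s3-website.eu-west-1.amazonaws.com
print(dash)  # http://my-bucket.s3-website-eu-west-1.amazonaws.com
```
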
The rest of the code right here is commented out for now; it will be used later in the course when I demonstrate CORS. Next, we also have an error.html file, which just says, "Oh, there was an error." Okay, great. So let's go ahead and upload these two files. I'm going to add files and upload both index.html and error.html. Here we go. Both files have been uploaded; next I scroll down and click on upload. Here we go. So now our bucket contains these files, and we want to be able to have a look at them. So we're going to go into Properties and make our bucket a website. If I scroll down to "Static website hosting," I can see it's disabled right now, but let's edit it. We want to enable it: we will host a static website, the index document is index.html, and the error document is error.html, which correspond to the two files we just uploaded. We save the changes, and now our bucket is ready to be a static website host. If I scroll all the way down, we can find the bucket website endpoint. So let's try it out: I'm going to copy this, open a new tab, and paste it. And what do we find? 403 Forbidden, so we are not allowed to access our bucket. And this makes sense, right? Because currently the bucket and its objects are not public, and we're trying to access them publicly. So how do we fix this? Two steps. We go to Permissions. First, we want to disable the block public access settings, otherwise we'll never be able to make our bucket public. So let's edit this and untick "Block all public access." This is going to allow us to make some objects public and set a public bucket policy.
So let's confirm it; that's the first step. The second step is to implement a bucket policy that allows public access. So let's go into the policy generator one more time. It's going to be an S3 bucket policy. We're going to allow anyone, so Principal "*", to do, this time, not a PutObject but a GetObject. So let's select s3:GetObject. And the resource ARN is going to be, again, the bucket ARN followed by "/*", saying anyone can get any object in this bucket. This makes sense. I click on Add Statement, generate the policy, copy it, and paste it here. And here we go: this bucket policy has made my S3 bucket public. How do I know? Well, now, under Access, it says "Public," and there's a little warning sign, because AWS doesn't really want you to have public buckets, and they want you to be very careful any time you make anything public on Amazon S3. Okay? So, if we return to this URL and refresh it, we can see Hello Coffee, Hello World, and the coffee.jpg file. So this works great. If I go to /coffee.jpg, I should see just that coffee.jpg file. And if I type a random URL that does not exist, for example a random JPEG, I'm going to get an error page that corresponds to the error.html file we defined before. So all is well: we have made our S3 bucket a static website host, which is great, and I will see you in the next lecture.
10. [SAA/DVA] S3 CORS
Now let's talk about CORS: cross-origin resource sharing. This is a complicated concept, but it does come up in very simple use cases. I want to go deep into CORS to really explain how it works, because that will make answering the exam questions extremely easy. So what is an origin? An origin is a scheme (a protocol), a host (a domain), and a port. In English, what that means is that, for example, https://www.example.com is an origin, where the scheme is HTTPS, the host is www.example.com, and the port is 443. Why? Because as soon as you have HTTPS, 443 is the implied port. Okay, so CORS means cross-origin resource sharing: getting resources from a different origin. The web browser has this security in place: CORS basically says that as soon as you visit a website, you can make requests to other origins only if those other origins allow you to make those requests. This is browser-based security. So what is the same origin, and what is a different origin? For instance, https://www.example.com/app1 and https://www.example.com/app2 are the same origin, so we can make requests from the browser from the first URL to the second URL. But if you visit, for example, www.example.com, and then you ask your web browser to make a request to another domain, that's what's called a cross-origin request, and your web browser will block it unless you have the correct CORS headers. We'll see what these are in a second. So now that we know what is the same origin and what is a different origin, we know that the request will not be fulfilled unless the other origin allows for the request, using the CORS headers. The main CORS header is Access-Control-Allow-Origin, which you will see in the hands-on. Okay, so that's just the theory. Now let's go into the diagram; it will make a lot more sense. So here's our web browser, and it visits our first web server.
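The same-origin rule (same scheme, host, and port, with ports 80/443 implied by http/https) can be sketched like this:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """Return the (scheme, host, port) triple that defines a URL's origin."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    """Two URLs share an origin iff scheme, host, and port all match."""
    return origin(a) == origin(b)

# Same origin: only the path differs.
print(same_origin("https://www.example.com/app1", "https://www.example.com/app2"))  # True
# Cross-origin: different host, so the browser needs CORS headers.
print(same_origin("https://www.example.com/app1", "https://other.example.com/app1"))  # False
```
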
And because this is the first server we visited, it's called the origin. So, for example, our web server is at https://www.example.com. Okay, great. And there is a second web server called a cross-origin server, because it has a different URL: https://www.other.com.
So the web browser visits our first origin, and the files loaded from the origin ask it to make a request to the cross-origin server. What the web browser will do is make what's called a preflight request. This preflight request asks the cross-origin server if the browser is allowed to make a request on it. So it's going to say, "Hey, cross-origin server, the website https://www.example.com is sending me to you; can I make a request on your website?" And the cross-origin server replies: yes, here is what you can do. The Access-Control-Allow-Origin header says whether this origin is permitted; yes, it is allowed, because it matches the origin on the left, the green one. And the allowed methods are GET, PUT, and DELETE, which means we can get a file, update a file, or delete a file. Okay, so this is what the CORS headers allow a web browser to do. And therefore, because our web browser has been authorized to do so, it can issue, for example, a GET to this URL, and it will be allowed, because the CORS headers it received previously allowed it to make this request. Okay, so this may be new to you, and this may be a lot, but you need to understand this before we go on to the next use case, which is S3 CORS.
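The browser's decision after the preflight can be sketched as comparing the page's origin and intended method against the `Access-Control-Allow-Origin` and `Access-Control-Allow-Methods` response headers. This is a simplified model (real CORS involves more headers and rules), just to capture the idea:

```python
def cors_allows(request_origin, method, response_headers):
    """Simplified preflight check: the cross-origin server must echo back
    the requesting origin (or "*") AND list the HTTP method."""
    allow_origin = response_headers.get("Access-Control-Allow-Origin", "")
    allow_methods = response_headers.get("Access-Control-Allow-Methods", "")
    origin_ok = allow_origin == "*" or allow_origin == request_origin
    method_ok = method in [m.strip() for m in allow_methods.split(",")]
    return origin_ok and method_ok

# Headers the cross-origin server might return in the preflight response.
headers = {
    "Access-Control-Allow-Origin": "https://www.example.com",
    "Access-Control-Allow-Methods": "GET, PUT, DELETE",
}
print(cors_allows("https://www.example.com", "GET", headers))   # True
print(cors_allows("https://www.evil.com", "GET", headers))      # False: origin not allowed
print(cors_allows("https://www.example.com", "POST", headers))  # False: method not allowed
```
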
So if a client does a cross-origin request on our S3 bucket enabled as a website, then we need to enable the right CORS headers. It's a very popular exam question, okay? You need to understand when we need to enable CORS headers and where we need to enable them. We'll see this in action as well. We can allow a specific origin by specifying the entire origin name, or use a star for all origins. So let's have a look. The web browser, for example, is getting HTML files from our bucket, which is enabled as a website, but there is a second bucket, our cross-origin bucket, also enabled as a website, that contains some files we want. So we're going to do a GET for index.html, and the website will say, "Okay, here is your index.html," and that file says you need to perform a GET for another file on the other origin. If the other bucket is configured with the right CORS headers, then the web browser will be able to make the request; if not, it will not be able to. And that is the whole purpose of CORS. So, as we can see here, the CORS headers have to be defined on the cross-origin bucket, not the first origin bucket. Okay? So this is just the theory; we're going to go hands-on to see these concepts in a much more practical way. So that was it for this lecture; I will see you in the next one.
11. [SAA/DVA] S3 CORS Hands On
Okay, so let's demonstrate how the CORS settings work in AWS. If we go to this index.html file, we can see that there is a CORS demo part that was commented out, but I want to uncomment it. So starting at line 13, I remove this part right here and this part right here, and this uncomments the code. So, what does this have to do with anything? Well, this code runs a script, and the script is going to fetch a page called extra-page.html and then insert the response text of that extra-page.html. Something very simple: we make index.html load another HTML document, extra-page.html, which says "This extra page has been successfully loaded." Okay, so this is great. Now we want to upload these new file versions into our S3 bucket. So let's do it right now: we go into our S3 bucket, and I'm going to upload files, add files, and I will upload both the extra page and the index.html that I just modified. I will upload these two files. This is working great.
Let me go back to my bucket; I'll close my website tab and reopen it. If you look at my first webpage now, it says "Hello World," but at the bottom it says "This extra page has been successfully loaded," and this is exactly what we wanted. And if I go to /extra-page.html directly and press Enter, we just get that little bit: "This extra page has been successfully loaded." This has worked because both my index.html and my extra-page.html files are in the same bucket; they share the same origin. Okay? But now we want to test a CORS query, so, different origins. For this, we are going to create a new bucket. So let's go back into Amazon S3, and I'm going to create a bucket; I'll call it something like demo-stephane-cors-2020. Okay? And I will choose a region, so Ireland. Then I will unblock the public access, because we want the files within the bucket to be publicly readable. I acknowledge this and create the bucket. Next, I'm going to configure that bucket to be a website: I scroll down, and at the very bottom I enable static website hosting, with the same settings as before. Even though we are not uploading those index and error files into this bucket, that doesn't really matter. So this is good. The next thing we need to do, because the website won't work if I go to it now, is to go under Permissions, create a bucket policy, and make the bucket public. So I'll go back to my policy generator, or I can just copy the bucket policy I have from the first bucket; that's going to be easier. I paste it, and please make sure to change the resource name right here to the bucket we just created. Okay, so this is good; this has made my bucket public.
And so if I go now to this bucket's URL at /extra-page.html, we get a 404 Not Found. This makes sense, because I haven't uploaded the file yet. So let's upload the file, extra-page.html, and upload. Here we go; it has been uploaded. So now if I reload that page, we get "This extra page has been successfully loaded." So what I've done is set up two websites: my first website, and a second website that only contains extra-page.html. Now I'm going to go back into my main bucket and, first of all, remove its extra-page.html. So I delete it and type "delete" to confirm. Okay? And now if I go to my first web page and refresh it, I'm going to get an error, because it can't load extra-page.html anymore. The second thing I have to do is edit my index.html file to fetch this document from the new bucket's URL. So let's copy the full URL right here and paste it into the fetch. Okay, so now we have the HTTP URL with the name of the target bucket, from which we want to load extra-page.html. Now I'm going to re-upload that file into our main bucket: I add the file, index.html, press upload, and it has been successfully uploaded. So now, intuitively, you're saying "fetch this file from this new bucket," so it should definitely work. And this is where CORS comes into play. To demonstrate CORS, what I need to do is show you the Chrome developer tools: in the top right corner, More Tools, and then Developer Tools.
This is something I like to have on the right-hand side, just to show you what happens in the console and in the network tab while we make this request. So let me refresh this page right here. I refreshed it; there are no more 404 errors, but now we're getting an error message saying access from this origin has been denied: the request from the first bucket to the second bucket has been blocked by the CORS policy, because no Access-Control-Allow-Origin header is present on the requested resource. So this is where we need to set the CORS policy on the second bucket. Okay, so this is the error we get. Now let's change the CORS settings on the second bucket to allow the first bucket to make a request to it. How do we do this? We go to the second bucket, and I scroll down under Permissions to find the cross-origin resource sharing (CORS) settings. We have to define the configuration in JSON, so I will edit this. It used to be XML, but now it's JSON. So we need to paste in the correct configuration to allow my web page to work. Thankfully, I have this prepared for us: take the CORS JSON, copy it, and paste it. And for the allowed origin, you need to put the URL of the first bucket with the http:// prefix and without the slash at the end. So let's take the first website's URL and paste it right here.
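An S3 CORS configuration of this kind looks like the following, reconstructed here as a Python list of rules serialized to JSON (the URL in `AllowedOrigins` is a placeholder for your first bucket's website endpoint, with no trailing slash):

```python
import json

# S3 CORS configuration: one rule allowing GET requests from the first
# bucket's website origin. The endpoint below is a placeholder.
cors_rules = [
    {
        "AllowedHeaders": ["Authorization"],
        "AllowedMethods": ["GET"],
        "AllowedOrigins": ["http://my-first-bucket.s3-website.eu-west-1.amazonaws.com"],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000,
    }
]

print(json.dumps(cors_rules, indent=2))
```
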
So we do have the http://, and then please remove the last slash at the end. So we allow this origin to make requests on the second bucket. I understand this is a bit advanced; obviously, if you don't want to follow along, that's fine, but just remember the idea of CORS. But let's finish this hands-on. The CORS configuration has been saved, and that should be it. Now if I go back to my first web page and refresh it, we get no more errors, and yes, this extra page has been successfully loaded, thanks to the CORS setting. Okay, we can also check this by going to the Network tab. As we can see, extra-page.html has been successfully loaded, and if we look at the headers of this request and the response, we can see that the response headers include the Access-Control-Allow-Origin header, which contains the full HTTP URL of the first bucket. These headers are what allow the browser to load the second URL from the second bucket. Okay, so I understand this is quite an advanced hands-on, and you may not be a web expert, and this is fine; you don't need to remember all the steps of what I did. Just keep in mind that if website number one needs to access a resource on website number two via a web browser, website number two must have a CORS setting that allows the request to be made, or the web browser will block it. And this is what I wanted to demo to you through this hands-on. So that's it. I hope you liked it, and I will see you in the next lecture.
12. [SAA/DVA] S3 Consistency Model
So how does S3 consistency work? Well, as of December 2020, there is strong consistency on Amazon S3 for all operations. This is something that is different from before, so just remember that. Now it's very simple: all operations are strongly consistent. What does that mean? It means that after you write a new object (a PUT), or overwrite or delete an existing object, any read after that write will return the object that was just written. That wasn't always the case, but now it is. And any time you list the objects, you're also going to see that object in the list. So this is something new that Amazon S3 offers for free as of December 2020. Just remember that Amazon S3 is now strongly consistent: any time you do a write and then a read, the read will give you the same result as what you just wrote or uploaded. So this is perfect, and it is available to you at no additional cost and without any performance impact. So that's it, just a very short one on this, but I will see you in the next lecture.
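Strong read-after-write consistency just means this always holds: once a PUT (or overwrite, or delete) succeeds, every subsequent GET and LIST reflects it immediately. A toy in-memory model of the guarantee:

```python
# Toy model of a strongly consistent object store: every read after a
# successful write returns exactly what was written, with no delay.
class StronglyConsistentStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        self._objects[key] = value

    def delete(self, key):
        self._objects.pop(key, None)

    def get(self, key):
        return self._objects.get(key)

    def list_keys(self):
        return sorted(self._objects)

store = StronglyConsistentStore()
store.put("coffee.jpg", b"v1")
store.put("coffee.jpg", b"v2")   # overwrite
print(store.get("coffee.jpg"))   # b'v2': read-after-write sees the overwrite
print(store.list_keys())         # ['coffee.jpg']: lists reflect the write too
store.delete("coffee.jpg")
print(store.get("coffee.jpg"))   # None: read-after-delete sees the delete
```
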