1. S3 Websites
Okay, so now let’s talk about S3 websites. S3 can host static websites and make them accessible on the World Wide Web, and the website URLs will be very simple. It will be an HTTP endpoint, and it can look like this or like this, depending on the region you’re in. The idea is that it begins with the bucket name, then s3-website, then the AWS region, then amazonaws.com. And if you enable it for a website but don’t set a bucket policy that allows public access to your bucket, you will get a 403 Forbidden error, and we’ll see how we can fix that in this hands-on. So let’s go ahead and enable our S3 bucket’s website. Okay, so I am in my bucket, and what I want to do is upload a few files here so that we can start displaying some HTML. So I have created an index.html file. You don’t need to know HTML right now; the important part is right here. We’re just saying this is my first web page, it says hello world and I love coffee, and it’s going to load an image named coffee.jpg. That’s what it does. All the information here is in the comments.
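As a side note, the two endpoint shapes mentioned above can be sketched as a small Python helper. This is just an illustration of the URL formats, not an AWS API; depending on the region, AWS uses either a dot or a dash between s3-website and the region, and the bucket name here is hypothetical.

```python
def s3_website_endpoint(bucket: str, region: str, dash_style: bool = False) -> str:
    """Build an S3 static-website endpoint URL.

    Depending on the region, the separator between "s3-website"
    and the region is either a dot or a dash; both shapes exist.
    """
    sep = "-" if dash_style else "."
    return f"http://{bucket}.s3-website{sep}{region}.amazonaws.com"

# Dot-style endpoint:
print(s3_website_endpoint("my-bucket", "eu-west-1"))
# Dash-style endpoint:
print(s3_website_endpoint("my-bucket", "us-east-1", dash_style=True))
```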
We’ll come back to this in the next lecture, which is about CORS. Okay. The other thing I have is an error.html page, which just says there was an error, in case we do get an error. So let’s go back to Amazon S3, and we’re going to upload these two files. So we’ll open my index.html and my error.html and click on upload. And here we go, these files are getting uploaded to my bucket. And actually, I’m getting errors. Why am I getting errors? You should know why. Because if I go to permissions and then bucket policy, I do have a bucket policy, which forces me to upload files encrypted. And so what I need to do is remove this bucket policy. So here I am in my bucket policy editor. I’m going to take all this bucket policy and remove it — actually, I can click on Delete and then confirm with Delete. Here we go, it’s going to delete my bucket policy. And so now this bucket policy is gone, and I should be able to re-upload my files. Please note that when you update a bucket policy, it can take a bit of time to be reflected on your S3 bucket. So if you try to re-upload these two files, index.html and error.html, right away, you may still get an error if you go too fast. If you do get an error, try again, and then it should work. Okay, so now we have my coffee.jpg, my error.html, and my index.html. So what I’m going to do is go to properties and then static website hosting. And I will say, yes, please use this bucket to host a website.
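For reference, the kind of bucket policy that rejects unencrypted uploads — the one deleted above — typically looks like the following. This is a sketch built as a Python dict, with a hypothetical bucket name; it denies any PutObject that does not request SSE-S3 (AES256) encryption.

```python
import json

BUCKET = "my-example-bucket"  # hypothetical bucket name

# Deny any PutObject request that does not set the
# x-amz-server-side-encryption header to AES256.
deny_unencrypted_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            },
        }
    ],
}

print(json.dumps(deny_unencrypted_policy, indent=2))
```

With a policy like this attached, an upload without the encryption header is denied, which explains the errors seen in the console.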
The index document is the document to serve by default, which is index.html. And the error document is the document that will be shown in case I hit a wrong URL, which is error.html. For redirection rules, we’ll just leave it empty and click on Save. So now what I get here is an endpoint, which is right here, and this is my website, hosted on S3. So let’s try it out. We’ll go ahead and click on this endpoint, and what I’m getting is a 403 Forbidden, access denied. It’s pretty weird, right? We did enable the bucket for static website hosting, we do have an index.html, but we’re still not able to access our files. Well, when you think about it, this website is a public website on the public web, but this S3 bucket is a private S3 bucket. So we need to make this bucket public so that we can access our files in here. To do so, we need to do two things, if you remember correctly. First, we need to change the block public access settings to allow this bucket to become public. So I’m going to go and edit these settings, untick everything, click on Save, and confirm. We now allow this bucket to become public, but this is not enough to make it public. So if I refresh, I still get a 403 Forbidden error. To make this bucket public, we need to create a bucket policy. And what’s the best way to create a bucket policy? Well, it’s using the policy generator. So we want to create an S3 bucket policy. And because we want anyone to be able to view the website, the principal is a star, meaning anyone, and the service is Amazon S3. To view the website, the action we need is GetObject. Then, finally, the ARN: again, we need to get the ARN of our bucket, so let’s just copy it from here and paste it there. And we don’t forget to add a slash and a star at the end.
Add the statement and generate the policy. This is a bucket policy that will make our S3 bucket public. So let’s go ahead and save it. And now we get a message saying this bucket has public access. Okay, you have provided public access to this bucket, and AWS highly recommends that you never grant any kind of public access to your S3 buckets, except when you make a website like here. Okay? So the block public access settings are all disabled, and as you can see, the bucket policy is a public bucket policy. So now we have the big “Public” badge in here. And actually, if you go back to Amazon S3, you will see as well that the bucket says “Public”, with a very big warning sign. But that’s okay, because we intended for this bucket to become public. So back on our website, if we refresh this page now, we get our index.html file that is loaded, and it says hello world, I love coffee. And it did load our coffee.jpg directly from the bucket in here. So this right here is pretty cool. It is working fine, and it is using the full S3 website URL: the bucket name, then s3-website, then my region, then amazonaws.com, and so on. And if I try something else, like a random page path, it says there was an error, and this is the error message we get from our error.html web page. Okay, so this is working really well. We have our S3 website enabled, and I will see you in the next lecture to discuss cross-origin resource sharing, or CORS.
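The policy the generator produces is, in essence, the following JSON document. Here it is sketched as a Python dict with a hypothetical bucket name, so you can see each field that the hands-on filled in.

```python
import json

BUCKET = "my-example-bucket"  # hypothetical bucket name

# Allow anyone (Principal "*") to read every object in the bucket.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            # The trailing "/*" matters: the policy applies to the
            # objects inside the bucket, not to the bucket itself.
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

print(json.dumps(public_read_policy, indent=2))
```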
2. S3 CORS
So now let’s talk about CORS, cross-origin resource sharing. This is a complicated concept, but it does come up in the exam in very simple use cases. Still, I want to go deep into CORS to really explain to you how it works, because it will make answering the questions extremely easy. So what is an origin? An origin is a scheme (a protocol), a host (a domain), and a port. In English, this means that if you go to https://www.example.com, you are at an origin where the scheme is HTTPS, the host is www.example.com, and the port is 443. Why 443? Because 443 is the implied port as soon as you use HTTPS. Okay, so CORS means cross-origin resource sharing. That means we want to get resources from a different origin, and the web browser has this security in place.
CORS is basically saying that as soon as you visit a website, you can make requests to other origins only if those other origins allow you to make these requests. This is browser-based security. So what is the same origin, and what is a different origin? Well, for example, example.com/app1 and example.com/app2 are the same origin, so we can make requests from the browser from the first URL to the second URL, because it is the same origin. But if you visit, for example, www.example.com and then you’re asking your web browser to make a request to another website, other.example.com, this is what’s called a cross-origin request, and your web browser will block it unless you have the correct CORS headers, which we’ll see in a second. So now that we know what is the same origin and what is a different origin, we know that the request will not be fulfilled unless the other origin allows for the request, using the CORS headers. And the CORS header, which you will see in the hands-on, is called Access-Control-Allow-Origin.
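The same-origin rule described above — same scheme, host, and port — can be sketched with Python’s standard library. This is only an illustration of the rule, not anything a browser or AWS exposes:

```python
from urllib.parse import urlsplit

# Default ports implied by each scheme.
DEFAULT_PORTS = {"http": 80, "https": 443}

def origin_of(url: str) -> tuple:
    """Return the (scheme, host, port) triple that defines an origin."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(url_a: str, url_b: str) -> bool:
    return origin_of(url_a) == origin_of(url_b)

# Same origin: only the path differs.
print(same_origin("https://www.example.com/app1", "https://www.example.com/app2"))  # True
# Different origin: the host differs.
print(same_origin("https://www.example.com", "https://other.example.com"))  # False
```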
Okay, so that’s just the theory. Now let’s go into the diagram; it will make a lot more sense. So here’s our web browser, and it visits our first web server. And because this is the first one we visit, it’s called the origin. So, for example, our web server is at https://www.example.com. Okay, great. And there is a second web server, called a cross-origin, because it has a different URL, which is https://www.other.com. When the web browser visits our first origin, it will be asked, by the files loaded from the origin, to make a request to the cross-origin. So the web browser will first make what is known as a preflight request. This preflight request is going to ask the cross-origin if the browser is allowed to make a request on it. So it’s going to say, “Hey, cross-origin, the website https://www.example.com is sending me to you. Can I make a request on your website?” And the cross-origin says yes, here’s what you can do. The Access-Control-Allow-Origin header says whether this origin is permitted. So yes, it is allowed, because it matches the origin on the left-hand side, the green one, and the authorized methods are GET, PUT, and DELETE, which means we can get a file, update a file, or delete a file.
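The decision the browser makes with the preflight response can be sketched as follows. This is a deliberately simplified model — real browsers check more headers than these two — and the header values shown are just the ones from the diagram above:

```python
def cors_allows(request_origin: str, method: str, preflight_headers: dict) -> bool:
    """Decide whether a cross-origin request may proceed, based on the
    Access-Control-Allow-Origin and Access-Control-Allow-Methods headers
    returned by the preflight response (simplified)."""
    allow_origin = preflight_headers.get("Access-Control-Allow-Origin", "")
    allow_methods = preflight_headers.get("Access-Control-Allow-Methods", "")
    origin_ok = allow_origin == "*" or allow_origin == request_origin
    method_ok = method.upper() in [m.strip().upper() for m in allow_methods.split(",")]
    return origin_ok and method_ok

# The cross-origin (www.other.com) says www.example.com may GET, PUT, DELETE:
headers = {
    "Access-Control-Allow-Origin": "https://www.example.com",
    "Access-Control-Allow-Methods": "GET, PUT, DELETE",
}
print(cors_allows("https://www.example.com", "GET", headers))   # True
print(cors_allows("https://www.evil.com", "GET", headers))      # False
```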
Okay, so this is what the cross-origin is allowing our web browser to do. These are the CORS headers, and therefore, because our web browser has been authorized, it can issue, for example, a GET to this URL, and it will be allowed, because the CORS headers previously allowed the web browser to make this request. Okay, so this may be new to you, this may be a lot, but you need to understand it before we go on to the next use case, which is S3 CORS. So, if a client makes a cross-origin request to our S3 bucket enabled as a website, we must enable the appropriate CORS headers. It’s a very popular exam question. Okay, so you need to understand when we need to enable CORS headers and where we need to enable CORS headers, and we’ll see this in action as well. We can allow a specific origin by specifying the entire origin name, or a star for all origins. So let’s have a look.
The web browser, for example, is getting HTML files from our bucket, and our bucket is enabled as a website. But there is a second bucket, which is going to be our cross-origin bucket, also enabled as a website, that contains some files that we want. So we’re going to do a GET for index.html, and the website will say, okay, here’s your index.html, and that file is going to say, you need to perform a GET request for another file on the other origin. And if the other bucket is configured with the right CORS headers, then the web browser will be able to make the request. If not, it will not be able to make that request. And that is the whole purpose of CORS. So, as we can see here, the CORS headers have to be defined on the cross-origin bucket, not the first-origin bucket. Okay? So this is just the theory. We’ll experiment to see these concepts in a much more practical way. That concludes the lecture; I will see you in the next one.
3. S3 CORS Hands On
Okay, so I am back in my index.html file, and I want to upload a new version of it. I’m going to demo CORS, and therefore I need to uncomment this part of my code. So right under “CORS demo”, you need to remove the comment markers around the div and the script. Okay, this is what you need to do. Then you save it. As we can see from this file, and we’re not HTML experts, it’s going to do a fetch on extra-page.html. This is the request it will make, and for now it is from the same origin. So it fetches extra-page.html from the same origin, and that extra page just contains: hey, this extra page has been loaded successfully. Okay? So let’s go ahead and upload these two files and see what happens. We’re going to take these files, index.html and extra-page.html, and we’re going to upload them. Excellent, they’re uploaded.
So, if we go to our website and navigate to /extra-page.html, we can see that this extra page has been loaded successfully. So this is great; this is working fine. And if we go to our main page, index.html, we can see, right below the coffee photo, that this extra page has been loaded successfully. So as we can see here, when we do the same-origin request using the fetch command, it works. But what if we moved this extra-page.html file to a different S3 bucket? So let’s go and open Amazon S3, and I’m going to create a new bucket, named like my first one but with assets at the end: bucket-of-stefan-2020-assets. Okay, so this is another bucket in the same region. I’m going to allow all public access, because we want to make this bucket public, and I’ll create the bucket. Yes, I acknowledge that it is going to be a public bucket as soon as I add my bucket policy. So I’m going to go in here and add a bucket policy.
The bucket policy is going to be very similar to the one before; I’m just going to change it a little bit. So I’m going to add “-assets” here, so that we have the right bucket name, and this should make my bucket public. And yes, it says my bucket is now public. I’m also going to enable this bucket for static website hosting. I’m just going to put index.html here, which will be more than enough, and then press save. And finally, I need to upload my extra page. So I’m going to upload my extra-page.html in here, and yes, it is uploaded. So now if I open up my S3 static website — this is my second website in here — and I add /extra-page.html, as you can see, it works. So what we’ve done here is that we’ve created a second bucket with just this extra-page.html file, and that file loads successfully when we use its full URL. So this is great. Okay, next, in the first bucket here, I’m going to delete that extra page. So actions and then delete, and yes, my file has been deleted. And I’m going to go to my index.html, and here, instead of fetching the extra page from the same origin, I want to fetch it from my other bucket. So I’m going to copy this entire URL and paste it here. And so now it’s going to fetch, over HTTP, this page from my other bucket, my assets bucket. Okay, so now we’re going to update this file in the first bucket.
So I’m going to add the file, my index.html, and upload, and now it’s been overwritten. So let’s go to my first web page, and I’m going to enable the Chrome developer console. This is something you can do by going to More Tools and selecting Developer Tools. We’re going to go to the console, and we’re going to refresh this page. Okay, so this is going to give us some information. So as we can see right now, the page hasn’t been refreshed yet, so the extra page line is still at the bottom. But now I refresh this page, and the line at the bottom disappeared, and we’re getting errors on the right-hand side. So here are the errors: the fetch to the second origin has been blocked because no Access-Control-Allow-Origin header is present on the requested resource. So this website here is not allowed to access the other website, because we haven’t defined the correct CORS headers. And this is the case I was just showing you in the previous lecture. So for it to work, we need to change the CORS configuration on the second bucket: the assets bucket needs to allow requests coming from my first origin.
So we’re going to go to my second bucket and go to the CORS configuration. I’d like to use this sample CORS configuration here, so I copy everything and paste it. And in the AllowedOrigin element, we need to enter the URL of the website we’re making the request from. So for me, this is my web page, the first bucket’s website URL. I’m going to copy it and paste it here. Here we go. So the allowed origin is http://, then the entire bucket website domain, and I’m going to remove the trailing slash just in case. So we’re saying the allowed origin is this; this is one way of doing it, and you can save it. Or, if you want to make it very simple, or if this doesn’t work for you, just put a star there, and it should work equally well — that allows any origin to get files from this bucket. So I click on Save. And now I’m going to refresh my page, and because we have set the correct CORS headers on the second bucket, this should work. So let’s reload this page. And yes, this extra page has been loaded successfully; as we can see, everything worked nicely. We can also verify this by going into the Network tab. So if we go to the Network tab, clear everything, and refresh this page, we can see this extra-page request right here being loaded. And if I click on it, the request URL is the one we have specified, the method is GET, and in the response headers we are getting Access-Control-Allow-Methods, Access-Control-Allow-Origin with the origin we have set, the max age, and so on. These headers, right here, allowed this cross-origin request to complete successfully. This is a lot of information I just gave you, but it shows you how CORS works in depth. Going into the exam, you don’t need to know exactly how to configure CORS.
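For reference, the S3 API (and the newer console) expresses the same CORS rule as JSON. Here is a sketch of the rule used above, built as a Python dict in the shape the S3 PutBucketCors API expects; the first-origin URL is hypothetical.

```python
import json

# Hypothetical first-bucket website URL (the origin making the fetch).
FIRST_ORIGIN = "http://my-bucket.s3-website.eu-west-1.amazonaws.com"

# CORS rule for the assets bucket: allow GETs coming from the first origin.
# Use ["*"] instead of the explicit origin to allow any origin.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": [FIRST_ORIGIN],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

print(json.dumps(cors_configuration, indent=2))
```

With boto3, this dict could be passed to `put_bucket_cors` as the `CORSConfiguration` argument, but that call needs real AWS credentials, so it is left out of the sketch.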
But remember that if one website makes a request to another website, then that other website needs to have the correct CORS headers defined for the request to complete successfully. And that’s the whole demo of CORS. I hope you enjoyed it and found it understandable, and I will see you in the next lecture.
4. S3 Consistency Model
Now, let’s talk about the Amazon S3 consistency model. Amazon S3 is a system that is eventually consistent. S3 is made of multiple servers, so when you write to Amazon S3, the other servers will replicate the data between themselves, and this is what leads to different kinds of consistency issues.
So you need to know a few rules. You get read-after-write consistency for PUTs of new objects. That means that as soon as you upload a new object and get a successful response from Amazon S3, you can do a GET of that object and retrieve it. So if you do a successful PUT — a PUT 200, where 200 means okay — then you can do a GET, and that GET will be a 200 as well. This is true except if you do a GET before the PUT, to check whether the object exists. If you do a GET and get a 404 for Not Found, and then you do a PUT, you can do a GET right after that and still get a 404, even though the object has already been uploaded. This is what’s meant by eventually consistent. With this model, you also get eventual consistency on DELETEs and PUTs of existing objects. In English, that means that if you read an object right after updating it, you may get the older version of that object.
So if you do a PUT on an existing object, you get a PUT 200; then you do another PUT, and finally you do a GET. If you are extremely quick, the GET may return the older version, and the way to get the newer version is just to wait a little bit. This is why it’s called eventual consistency. And if you delete an object, you might still be able to retrieve it for a very short time. So if you delete an object and then do a GET right after, the GET may still succeed.
So, a GET 200, because it’s eventually consistent. If you retry after a second, or five seconds, then the GET will give you a 404, because the object has been deleted. This eventual consistency model is something you should know going into the exam, and it’s very simple: you get read-after-write consistency for PUTs of new objects, and eventual consistency for DELETEs and PUTs of existing objects. The rules are extremely simple. Finally, I’ve had this question many, many times in the Q&A, so I’m going to answer it right now: there is no way to request strong consistency in Amazon S3. You only get eventual consistency, and there is no API to request strong consistency. That means that if you overwrite an object, you need to wait a little bit before you are certain that a GET will return the newest version of your object. Okay, so that’s it for this lecture. I hope you liked it, and I will see you in the next lecture.
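As a practical takeaway, client code written against an eventually consistent store often just retries a read until it sees the expected version. Here is a toy sketch of that pattern; the `store` dict stands in for S3 (no real AWS calls), and all names are hypothetical.

```python
import time

def get_with_retry(store: dict, key: str, expected: str,
                   attempts: int = 5, delay: float = 0.1):
    """Poll a key until it returns the expected value or attempts run out.

    `store` is a plain dict standing in for an eventually consistent
    object store; against real S3 you would replace the lookup with a
    GetObject call.
    """
    value = None
    for _ in range(attempts):
        value = store.get(key)
        if value == expected:
            return value
        time.sleep(delay)  # wait for replication to catch up
    return value  # caller may still see a stale (or missing) object

# Toy usage: the newest version is already visible here, so the
# first read succeeds immediately.
store = {"index.html": "v2"}
print(get_with_retry(store, "index.html", "v2"))  # v2
```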