Amazon AWS Certified SysOps Administrator Associate Topic: S3 Fundamentals Part 1
December 20, 2022

1. [SAA/DVA] S3 Buckets and Objects

Okay? So first, to talk about Amazon S3, we need to talk about buckets. S3 is a service that allows us to store objects (files) in buckets, which you can think of as top-level directories. Each bucket must have a globally unique name. As you'll see in the hands-on, we can't create a bucket whose name has already been taken. Buckets are defined at the region level. So even though S3 looks like a global service, buckets are regional resources, and there is a naming convention: no uppercase, no underscores, 3 to 63 characters, the name must not be formatted like an IP address, and it must start with a lowercase letter or a number.
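By the way, if you prefer the SDK to the console, here is a minimal sketch of creating a bucket with boto3; the bucket name and region below are placeholders you would replace with your own.

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Bucket names are globally unique: lowercase, no underscores, 3-63 characters.
bucket_name = "demo-my-unique-bucket-2022"  # placeholder, pick your own

# Outside us-east-1, the region must be passed as a LocationConstraint.
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```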

Okay? Now in these S3 buckets, we need to create objects. Objects are files, and they must have a key. And what is a key? It is the full path to that file. So if we have a bucket named my-bucket and an object named my_file.txt at the root, then the key is my_file.txt. However, if we have a folder structure within our S3 bucket, say my_folder1/another_folder/my_file.txt, then the key is the full path: my_folder1/another_folder/my_file.txt. The key can be decomposed into two things: the prefix and the object name. If we take that same long example, the prefix is my_folder1/another_folder/ and the object name is my_file.txt. So even though there is no real concept of directories within buckets, just very long key names, the UI will try to trick you into thinking otherwise, because we can create "folders" in the console. What we really have in S3 are just keys with very long names that contain slashes.
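To make that concrete, here is a tiny sketch (with made-up bucket and key names) showing that the "folder" is really just part of the key, and that listing by prefix is how folders are simulated:

```python
import boto3

s3 = boto3.client("s3")
bucket = "demo-my-unique-bucket-2022"  # placeholder

# The "folder" is just part of the key name.
s3.put_object(
    Bucket=bucket,
    Key="my_folder1/another_folder/my_file.txt",
    Body=b"hello world",
)

# Listing by prefix is how "folders" are simulated.
resp = s3.list_objects_v2(Bucket=bucket, Prefix="my_folder1/another_folder/")
for obj in resp.get("Contents", []):
    print(obj["Key"])
```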

Okay? So now let's look at the object itself. The object value is the content of the body. The maximum object size in Amazon S3 is five terabytes, or 5,000 GB, which is a huge object. But you cannot upload more than 5 GB at a time. That means that if you want to upload a big object of, say, five terabytes, you must divide it into parts of less than 5 GB and upload those parts independently using what's called a multipart upload. Each object in Amazon S3 can also have metadata: a list of key-value pairs, either system or user metadata, used to add information to your objects. You can also have tags, which are key-value pairs too, and they are very useful when you want to apply security to your objects or lifecycle policies. Finally, as we'll see when we get to versioning, there is a version ID on our Amazon S3 objects, and we'll see what the value of that is in the versioning lectures. So without further ado, let's get into the Amazon S3 console and do a hands-on.
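As a hedged illustration of the multipart point: with boto3 you normally don't drive the multipart API by hand; the transfer manager splits large files for you once they cross a configurable threshold. The filenames and sizes below are just example values.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above multipart_threshold are automatically split into parts
# (each part must stay under 5 GB) and uploaded as a multipart upload.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # start multipart above 100 MB
    multipart_chunksize=100 * 1024 * 1024,  # 100 MB parts
)

s3.upload_file(
    Filename="big_backup.tar",               # example local file
    Bucket="demo-my-unique-bucket-2022",     # placeholder
    Key="backups/big_backup.tar",
    Config=config,
)
```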

2. [SAA/DVA] S3 Buckets and Objects – Hands On

So let's go into the Amazon S3 (Simple Storage Service) console, where we can view all our buckets and create our first one. So I'll go ahead and click on Create bucket. Next, we have to give our bucket a name. And the name has to be globally unique across all accounts in AWS.

So if I name my bucket "test," even though there is no test bucket in my account, scroll all the way down, and click on Create bucket, I'm going to get an error saying that a bucket with the same name already exists. So I must choose a name that's going to be globally unique in AWS. Something like a demo bucket name with my name and the year should work; this is good enough. Next, you have to choose a region. So choose a region that's close to you. For me, I'm just going to go with EU (Ireland), eu-west-1. But you have to choose a region for your bucket. Note that the Amazon S3 console itself is global and does not require a region selection. So don't misunderstand: the console is global, but S3 buckets are regional, and you need to select a region for each of your S3 buckets. Okay, great.

Then I'm going to scroll down, and we have the Block Public Access settings. For now, I'm going to keep them on, and we'll discuss them when we define our S3 bucket policies and some security around the buckets. And so what I'll do is just skip the remaining settings, because we'll look at versioning and so on later, and click on Create bucket. So our first bucket has been created, and now I can click on it and explore the S3 console together. When we get there, we get the bucket-level view, and as we can see, we have tabs for objects, properties, permissions, metrics, management, and access points. We'll view a lot of those in this section. But for now, what I want to do is go ahead and upload our first object into Amazon S3. So I'm going to click on Upload and then Add files, and I'm going to upload my file called coffee.jpg. It's my first file, and I'm going to scroll down and look at the different settings we have. So as we can see, we have a destination, which is my bucket right here, and we can see that versioning for now is disabled. We'll talk about versioning later on. We can view other settings, and we could look at additional options.

But for now, let's keep things simple and just click on Upload. We'll see a lot of these settings later on in this section. Okay, so the object has been uploaded. And if I click back on my S3 bucket, I can see that the object count now shows the number one. So there's one object in my bucket: it's the coffee.jpg file, and it has been uploaded just now. Okay, so now let's open this file and take a look at some of its details. So how about we try to open this file? There are two ways to open a file in Amazon S3. The first one is to click on Object actions and then Open. If we do so, this opens a new tab, and this tab is showing us the coffee picture we were expecting. So this is great. The second way to open a file in Amazon S3 is to use the public object URL, which is right here.

So if I copy this object URL, open a new tab, and paste this URL, what do I find? I find an access denied error. So this is a bit weird, right? Here we were able to access it, and here we are not. Why? Well, in the third tab we're using the public URL of our object, and it turns out that our S3 bucket is not public. So right now, this is not public. Therefore, when I try to access any public URL on my bucket, it will not work and will give me an access denied error, because my bucket is not public. But the URL from the Object actions > Open route looks similar to the public one. The first part is actually very similar, but then there is a really, really long query string appended; as you can see, this is a very, very long URL. This is called a presigned URL, and we'll learn about it in this section as well. And so, using a presigned URL, what I'm doing is passing AWS my temporary credentials, saying, "Hey, AWS, I'm not just using the public URL; I'm also passing in my credentials. This is me, and you know who I am." And AWS says, "Okay, cool, I can show you this file, the coffee.jpg picture." That's what you get by performing Object actions > Open, which generates a presigned URL.
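For reference, here is a minimal sketch of generating that kind of presigned URL yourself with boto3; the bucket and key names are placeholders, and the one-hour expiry is just an example value.

```python
import boto3

s3 = boto3.client("s3")

# The URL embeds a signature derived from your credentials, so whoever
# holds it can GET the object until the URL expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "demo-my-unique-bucket-2022", "Key": "coffee.jpg"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)
```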

Okay, so back into our bucket we go. Let's try to create a folder. For example, I'm going to create an images folder and click Create folder. And then, within that folder, what I can do, for example, is upload another file. I can add a file and upload my beach.jpg, for example, and then click on Upload. And here we go, my file has been uploaded. So now if I navigate a little bit through my bucket, as you can see, I start at the root of my bucket, then I go into the key named images/, and within it I find my beach.jpg. I could open the file the same way we did before. Okay, pretty good. Finally, we can go back one level, where we see coffee.jpg and the images folder. Let's delete this folder, which deletes everything within it; as you can see, it is a permanent delete. To delete the objects and the folder, type "permanently delete" as instructed. So this has worked; the folder has been deleted. I can click on Exit in the top right corner. And here we go. We've had our first introduction to Amazon S3, in which we have created a bucket, uploaded a file, viewed that file, and played a little bit with folders and deleting objects. So that's it for the soft intro. We'll do a lot more in this section, but I hope you liked it, and I will see you in the next lecture.

3. [SAA/DVA] S3 Versioning

So now let's talk about Amazon S3 versioning. Your files in Amazon S3 can be versioned, but versioning must first be enabled at the bucket level, which we will do in the hands-on. Once it is enabled, re-uploading a file with the same key will not override the existing file; instead, it will create a new version of that file. So instead of overwriting the file that already exists, S3 keeps version 1, then version 2, then version 3, and so on (I'm simplifying the version IDs here). It is best practice to version your buckets in Amazon S3 so that you keep all the versions of a file for a while: you are protected against unintended deletes because you can restore a previous version, and you can also easily roll back to any previous version you want. A few things to know, though: any files that were not versioned prior to enabling versioning will have the version ID null, and if you suspend versioning on your bucket, it does not delete the previous versions; it just makes sure that future files do not have a version ID assigned to them. So let's go into the hands-on to see how it works.
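If you want to do the same thing outside the console, here is a small sketch of enabling versioning with boto3; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")
bucket = "demo-my-unique-bucket-2022"  # placeholder

# Turn versioning on; use Status="Suspended" to stop assigning
# version IDs to new uploads (existing versions are kept).
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

print(s3.get_bucket_versioning(Bucket=bucket).get("Status"))
```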

4. [SAA/DVA] S3 Versioning – Hands On

So next, let's explore versioning in our S3 bucket. For now, we just uploaded the coffee.jpg file, and we didn't enable versioning when we created the bucket. But let's go into the Properties tab, and the first setting in the Properties tab is called Bucket Versioning.

So let's edit it and move bucket versioning from suspended to enabled. I will click on Save changes, and here we go: our bucket now has versioning enabled. So let's close this and go back to our objects. Great. So this is our first object. But now that we have enabled versioning, we have a new toggle here called List versions. If I click on it, it will add a column into the Amazon S3 view that shows me the version ID. And it turns out that for coffee.jpg, the version ID is null. That's because we uploaded this object before we enabled versioning on our bucket, and therefore it does not have a version ID, hence null. But let's try to upload a new file, and this time we're going to upload the beach picture, beach.jpg. I will scroll down and upload. Now that the file has been uploaded, let's go back to our bucket and list the versions. Now we can see that beach.jpg has a version ID that is a very long string. And this makes sense: we uploaded this file after enabling bucket versioning, so it gets assigned a version ID. Okay, what about if we upload the beach.jpg file again? So let's upload beach.jpg and see what happens.

Okay, back in here, I still have two objects, but if I list my versions now, we can see that beach.jpg has two version IDs: the one we uploaded before, and the new version ID we just uploaded right now. So as we can see, thanks to versioning, every time we re-upload a file, it keeps all its previous versions as well as the new one and assigns a different version ID every single time. So this is cool. Now, what happens if we upload the coffee.jpg file another time? Let's try it out and upload coffee.jpg. Great. List the versions again: we can see that coffee.jpg has two versions. The one we just uploaded has a long version ID, and the one from before has a version ID of null. Okay, this is great. Now let's toggle off the list versions view, so we just have these two files right here, and let's try to delete a file. So, for example, let's take beach.jpg and try to delete it. So I click on Delete, and as we can see, we get a message saying that deleting the specified objects adds delete markers to them.

So we're actually not deleting the file itself; we're adding a delete marker. Let's see what that means. So let's confirm we want to delete beach.jpg and press Delete. Okay, so this is done. The object has been successfully deleted. Let's exit this, scroll down, and yes, it seems that my beach.jpg has been deleted. But actually, if I click on List versions and toggle this again, well, we can see that beach.jpg is still here. But now, on top of the two previous versions we uploaded, we have a delete marker. This delete marker has its own version ID and a size of zero bytes. And it's telling AWS, "Make it seem like this file is gone." But actually it's not gone; it just has a delete marker. And so, how can I restore the file? Well, I can just select the delete marker and delete it. Now, deleting a delete marker or deleting a specific version ID is called a permanent delete. So if I take, for example, these two items right here, the delete marker and the previous version, because I want to delete both at once, I click on Delete. Now, instead of just saying delete, it asks me to permanently delete, because I'm deleting specific version IDs. So it is a destructive operation. Before, deleting just added a delete marker, but now we're actually deleting the objects for good; this cannot be undone. I click on Delete objects, and my objects have been deleted. Going back into my bucket one more time, what do we see? We see that my beach.jpg and my coffee.jpg are back. And if I list versions, I only have one version for my beach.jpg, which makes sense. This is exactly in line with the operation we just did.
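If you want to see the same versions and delete markers programmatically, here is a small sketch with boto3 (bucket and key are placeholders); deleting without a VersionId adds a delete marker, while deleting with a VersionId is a permanent delete.

```python
import boto3

s3 = boto3.client("s3")
bucket = "demo-my-unique-bucket-2022"  # placeholder

# "Delete" without a version ID: only adds a delete marker.
s3.delete_object(Bucket=bucket, Key="beach.jpg")

# Inspect all versions and delete markers for that key.
resp = s3.list_object_versions(Bucket=bucket, Prefix="beach.jpg")
for v in resp.get("Versions", []):
    print("version", v["VersionId"], "latest:", v["IsLatest"])
for m in resp.get("DeleteMarkers", []):
    print("delete marker", m["VersionId"])

# Permanently deleting the delete marker "restores" the object.
marker_id = resp["DeleteMarkers"][0]["VersionId"]
s3.delete_object(Bucket=bucket, Key="beach.jpg", VersionId=marker_id)
```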

Okay? And so, thanks to versioning, we can roll back and restore a previous version of an object. And this is quite cool because, well, if someone goes and deletes a file and we have enabled versioning, then unless they permanently delete a specific object version, we are protected against unintended deletes. And finally, for bucket versioning, you could go and suspend it. Suspending is not deleting: if you do suspend versioning, the bucket keeps all the previous versions you had from before; only new uploads stop getting version IDs. Okay? Now I'm going to keep bucket versioning enabled because I need it for the rest of my demos. So that's it for this part. I hope you liked it, and I will see you in the next lecture.

5. [SAA/DVA] S3 Encryption

So, let's move on to Amazon S3 encryption for your objects. The idea is that you upload objects onto Amazon S3 servers, which are AWS servers. And so you may want to make sure that these objects cannot be read by anyone who gets access to those servers, or you may need to adhere to security standards set up by your company. For this, Amazon gives you four methods to encrypt objects in Amazon S3.

The first one is called SSE-S3. This encrypts S3 objects using keys handled and managed by AWS. The second one is SSE-KMS, which leverages the AWS Key Management Service to manage your encryption keys. The third one is SSE-C, where you manage your own encryption keys. And finally, there is client-side encryption. Now, we're going to do a deep dive on all of those, so don't worry. And it's important to understand which ones are adapted to which situation for the exam, because the exam will definitely ask you questions where you have to choose the right kind of encryption based on the scenario. So let's start with SSE-S3. This is encryption where the keys used to encrypt the data are handled and managed by Amazon S3, and the object is encrypted server-side.

SSE means server-side encryption, and the encryption algorithm used is AES-256. So to upload an object with SSE-S3 encryption, you must set a header called "x-amz-server-side-encryption": "AES256". x-amz stands for "X Amazon", so it's X Amazon server-side encryption, AES-256, and this is how you remember the name of the header. So let's have a look. We have an object, and it is unencrypted. We want to upload it to Amazon S3 and have SSE-S3 encryption applied. For this, we upload the object to Amazon S3 as usual, using HTTP or HTTPS, and we add the header we just mentioned, "x-amz-server-side-encryption": "AES256".
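As a minimal sketch of what that looks like through the SDK (boto3 sets the x-amz-server-side-encryption header for you), assuming placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: S3 encrypts the object with a key it owns and manages.
s3.put_object(
    Bucket="demo-my-unique-bucket-2022",   # placeholder
    Key="coffee.jpg",
    Body=open("coffee.jpg", "rb"),
    ServerSideEncryption="AES256",          # -> x-amz-server-side-encryption: AES256
)
```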

And then Amazon S3, thanks to this header, knows that it should apply its own S3-managed data key. Using that S3-managed data key and the object, encryption happens, and the object is stored encrypted in your Amazon S3 bucket. Very simple. But here, the data key is entirely owned and managed by Amazon S3. Next, SSE-KMS. We haven't seen what KMS is yet; we'll cover it towards the end of this course, in the security section. But KMS is the Key Management Service, which is an encryption service. So SSE-KMS is when your encryption keys are handled and managed by the KMS service. Why would you use SSE-KMS over SSE-S3? Well, it gives you control over who has access to which keys, and it also gives you an audit trail. Each object is again encrypted server-side, and for this to work, we must set the header "x-amz-server-side-encryption": "aws:kms". The idea is exactly the same because it is server-side encryption: we upload the object using HTTP(S) with that header, and then, thanks to this header, Amazon S3 knows to apply the KMS customer master key you have defined. Using that customer master key, your object is encrypted, and the file is saved in your S3 bucket with the SSE-KMS encryption scheme. Next, we have SSE-C, which stands for server-side encryption using keys that you provide yourself, from outside of AWS.
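Here is a hedged sketch of an SSE-KMS upload with boto3, assuming the AWS-managed aws/s3 key (you could pass your own key's ARN instead); bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# SSE-KMS: the object is encrypted server-side with a key managed in KMS,
# which gives you key-level access control and an audit trail.
s3.put_object(
    Bucket="demo-my-unique-bucket-2022",    # placeholder
    Key="beach.jpg",
    Body=open("beach.jpg", "rb"),
    ServerSideEncryption="aws:kms",          # -> x-amz-server-side-encryption: aws:kms
    SSEKMSKeyId="alias/aws/s3",              # AWS-managed key; or your own key ARN
)
```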

So in this case, Amazon S3 does not store the encryption key you provide. It will of course use it, because it needs to do the encryption, but then the key is discarded. Because you are sending a secret to AWS, you must use HTTPS to transmit the data, so you must have encryption in transit. The encryption key must be provided in the HTTP headers of every request you make, because it is discarded every single time. So we have the object, and we want it encrypted in Amazon S3, but we want to provide our own client-side data key to perform the encryption. We send both of these over HTTPS, so it's an encrypted connection between you, the client, and Amazon S3, and the data key travels in the headers. Amazon S3 then has the object and the client-provided data key, and again, it is server-side encryption: Amazon S3 performs the encryption using these two things and stores the encrypted object in your S3 bucket. If you want to retrieve that file from Amazon S3 using SSE-C, you need to provide the same client-side data key that was used. So it requires a lot more management on your end, because you manage the data keys, and Amazon S3 (or AWS in general) does not know which data keys you have used. So it's a bit more involved. Okay. And then finally, client-side encryption.
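A minimal sketch of SSE-C with boto3, assuming a locally generated 256-bit key and placeholder names; boto3 encodes the key into the SSE-C headers for you, and you must supply the exact same key again on download:

```python
import os
import boto3

s3 = boto3.client("s3")  # boto3 talks to the HTTPS endpoint by default

customer_key = os.urandom(32)  # your own 256-bit key, kept outside AWS

# Upload with SSE-C: S3 encrypts server-side with your key, then discards it.
s3.put_object(
    Bucket="demo-my-unique-bucket-2022",     # placeholder
    Key="secret.txt",
    Body=b"very secret data",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# To read it back, you must send the exact same key in the request headers.
obj = s3.get_object(
    Bucket="demo-my-unique-bucket-2022",
    Key="secret.txt",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
print(obj["Body"].read())
```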

With client-side encryption, you encrypt the object yourself before uploading it to Amazon S3, and some client libraries can help you with this. For example, the Amazon S3 Encryption Client is one way to perform that client-side encryption. As I said, the client must encrypt the data before sending it to S3. And if you retrieve data that was encrypted client-side, you are solely responsible for decrypting it yourself as well, so you need to make sure you have the right key available. In client-side encryption, the customer manages and maintains all the keys and the full encryption cycle. So let's look at an example. This time, Amazon S3 is just a bucket; it's not doing any encryption for us, because it's client-side encryption, not server-side encryption.

And so on the client, we'll use an encryption SDK, for example the Amazon S3 Encryption Client, together with the object and our client-side data key. The encryption happens client-side, so the object is fully encrypted on the client. Then we simply upload that already-encrypted object to Amazon S3. Okay? So those are the four types of encryption. Hopefully, that makes sense. I've also been mentioning encryption in transit in this lecture, so let me make very clear what it is; it's about SSL and TLS connections. Amazon S3 exposes an HTTP endpoint that is not encrypted and an HTTPS endpoint that is encrypted and provides what's called encryption in flight, which relies on SSL/TLS certificates. You're free to use whichever endpoint you want, but if you use the console, for example, you would be using HTTPS, and most clients use the HTTPS endpoint by default anyway. If you're using HTTPS, that means the data transfer between your client and Amazon S3 is fully encrypted, and that's what's called encryption in transit. One thing to know is that if you're using SSE-C, so server-side encryption with a key provided by your client, then HTTPS is mandatory. Also, encryption in flight is also called SSL/TLS because it uses SSL and TLS certificates. So let's go into the hands-on to see how encryption works.
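To make the client-side pattern concrete, here is a simplified sketch that encrypts locally with the Python cryptography library's Fernet before uploading; this is only a stand-in for a real client such as the Amazon S3 Encryption Client, and all names are placeholders.

```python
import boto3
from cryptography.fernet import Fernet  # pip install cryptography

# The key never leaves the client; you are responsible for storing it safely.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = open("report.csv", "rb").read()      # example local file
ciphertext = fernet.encrypt(plaintext)

# S3 only ever sees the encrypted bytes.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="demo-my-unique-bucket-2022",          # placeholder
    Key="report.csv.enc",
    Body=ciphertext,
)

# On download, decryption is entirely up to you.
obj = s3.get_object(Bucket="demo-my-unique-bucket-2022", Key="report.csv.enc")
restored = fernet.decrypt(obj["Body"].read())
```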

6. [SAA/DVA] S3 Encryption – Hands On

Now let's look at the encryption settings. So we'll go into the coffee.jpg file, scroll down, and take a look at the encryption settings. As we can see from the server-side encryption settings, default encryption is currently disabled, and server-side encryption is disabled, so our object is not encrypted. Now, we could edit it and encrypt it in place, but I want to show you how it's done when we upload a file. So let's go ahead and upload a file: I'll click Add files and select coffee.jpg again. Now we'll scroll down, and I will look at the additional options for encryption. So let me scroll down; we are getting into the server-side encryption settings, and I will click on Enable. And here we have different kinds of options.

We could keep it disabled, meaning no server-side encryption, which was the default from before, or we can enable it. The first encryption type we've learned about is SSE-S3. In this case, we are using an Amazon S3 key, an encryption key that Amazon S3 creates, manages, and uses for us. So, fairly easy. This is one we could use, and we could just go ahead and upload that file with an Amazon S3 key. So let's scroll down, upload it, and that's it: we have uploaded a file with SSE-S3 encryption. Now let's do it again, but for beach.jpg. So let's add a file and choose beach.jpg. I'm going to expand the additional upload options, scroll down, and enable server-side encryption. The second option is the AWS KMS key, so SSE-KMS. In that case, as we've seen, we still have an encryption key, but this time that encryption key is protected by the KMS service. And here we have a couple of options. We could use an AWS managed key, aws/s3, which would be the easy option; or you can choose from your own KMS master keys if you wanted to create your own key, which we will not do right now; or, if the key is in another account, we could manually enter the KMS master key ARN (Amazon Resource Name) in this field. To keep things simple, we'll use the AWS managed key, aws/s3.

This is going to make sure that the encryption happens by making API calls into the KMS service. Okay, so let's upload this file. Okay, now that we've exited this, let's take a look at what we have. So we have five object versions: different coffee.jpg versions and different beach.jpg versions. So we can look at specific object versions. If we look at the beach.jpg we just uploaded and the one from before, let's see what the encryption says. So this is the one I just uploaded, and if we look at the encryption setting, it's encrypted with an AWS KMS master key, so SSE-KMS. And if I scroll down to the beach.jpg we had when we first uploaded that file, we can see that there is no server-side encryption. So what this means is that the encryption setting applies to a specific file and its specific version ID. So we can upload these files manually and specify the encryption settings for each file. Or, if we wanted to, we could go into Properties and specify a default encryption mechanism for the bucket. So how do we do it? Well, let's edit this default encryption setting. So here we go: I will edit it, and we will enable server-side encryption by default. And suppose we want every single object to be uploaded with the Amazon S3 key (SSE-S3) by default.

So we'll select this and save the changes. And now let's try to upload a file without specifying any encryption. So let's have a look: we'll go to Objects, click Upload, and I'm going to upload a JPEG file, but I'm not specifying any encryption. As we can see, though, default encryption is enabled on the bucket, and if I go into the additional upload options, it says encryption will use the default bucket encryption settings. And so, as you can expect, if I upload this file, what is going to happen? Well, let's have a look. I'm going to click on this file right here and look at the kind of server-side encryption setting it has. And yes, it has an Amazon S3 managed key. So the default encryption setting worked properly. And lastly, you may be asking me, "Hey, Stefan, you taught us about more encryption options, so why don't we see them?" So let's have a look. If I go into the options and look at overriding the encryption, as we can see, we only have an Amazon S3 key, so SSE-S3, or SSE-KMS. Another one we learned about is SSE-C, and we can only do that through the CLI or SDK, because we have to pass an encryption key securely into AWS to encrypt that object. This is not something that can be done through the console at this time, so the SSE-C option is not shown. And the last option I showed you was client-side encryption. Client-side encryption means that we encrypt objects client-side, on our own computers, before uploading them to Amazon S3. And Amazon S3 doesn't really care whether the bytes are encrypted or not; it will just store them anyway. So this is why this option does not show up here either. That makes sense: only SSE-S3 and SSE-KMS are visible here. So that's it for this lecture. I hope you liked it, and I will see you in the next lecture.
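For completeness, here is a small sketch of setting that same default bucket encryption with boto3 (bucket name is a placeholder); swap the algorithm for aws:kms plus a KMSMasterKeyID if you want SSE-KMS as the default.

```python
import boto3

s3 = boto3.client("s3")

# Every new object uploaded without explicit encryption settings
# will be encrypted with SSE-S3 by default.
s3.put_bucket_encryption(
    Bucket="demo-my-unique-bucket-2022",   # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```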
