Amazon AWS Certified Developer Associate Topic: S3 Part 1
December 20, 2022

1. S3 Essentials

So, what exactly is S3? S3 stands for Simple Storage Service, and it offers developers secure, durable, and highly scalable object storage. Amazon S3 is easy to use, with a simple web services interface to store and retrieve any amount of data from anywhere on the web. So, basically, what is S3? Well, it’s a place to store your files in the cloud. And actually, the founder of Dropbox used to store all his data on a USB drive, and he’d be going to university or college, and he just found it troublesome. And then, when Amazon released S3, he designed a simple web page that interacted with S3, which allowed him to get rid of his USB drive and store all his data up in S3. And of course, now he is probably a billionaire, certainly a multimillionaire.

And Dropbox still leverages S3 for its storage. They manage all the metadata themselves inside their own data centers, but it still relies on S3 as the backbone. So what do I mean by object-based storage? In terms of storage, there are two types: object-based and block-based. An object is simply something like a video, a photograph, or a PDF or Word document. They’re called flat files. So that’s what we mean by object-based storage. And it’s fundamental to understand that S3 is not a place where you would install an operating system or where you’d run a database from.

For that, you need block-based storage. We’re going to cover that in the EC2 section of the course. So S3 is just a place to store your files in the cloud, and the data is actually spread across multiple devices and multiple facilities. So it’s designed to withstand failure. S3 provides excellent service at an affordable price, and it’s very widely used. I actually don’t even bother looking at hard disk size now when I buy laptops, because I store all my files up in S3, and you can get these really cool applications like those from CloudBerry Lab, which give you a Windows Explorer-style environment that’s actually backed by S3. So it looks like you’re in a normal Windows environment, but where you’re actually storing the files is in S3. So let’s start with the basics of S3. Like I said, it’s object-based. It allows you to store your files.

It’s not a place where you’re going to install an operating system or run a database from. For that, you need block-based storage. Your files can be anywhere from zero bytes all the way up to five terabytes in size, and there is unlimited storage. And by that, we just mean that Amazon monitors the S3 capacity across all the different regions in the world, and as and when they need more servers, they’ll go in and provision them. And then files are stored in buckets. And if you’re new to Amazon, it sounds a bit weird. Like, what the hell is a bucket?

A bucket is just a folder. So think of a bucket as a folder. You put all your files in buckets, and you can have many different buckets within your S3 environment. Now, S3 is a universal namespace, and that just means that bucket names must be unique globally. And we’ll investigate what that entails in the lab. But if you were to create a bucket called Test Bucket, for example, you wouldn’t be able to grab that bucket name, because somebody else already owns it. So I own acloudguru, for example. You won’t be able to create the acloudguru bucket. And then when you create a bucket, it basically creates a DNS address. So it’s always s3, then a hyphen and the region (this is eu-west-1, which is based in Ireland), then amazonaws.com, then a forward slash and the bucket name: for example, https://s3-eu-west-1.amazonaws.com/acloudguru.
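That address format can be sketched as a tiny helper. The region and bucket name below are just the examples from above, and note that AWS also supports a virtual-hosted style of URL; this sketch covers only the path-style format the lecture describes.

```python
# A minimal sketch of the path-style S3 URL format described above.
# The region and bucket name are illustrative, not real resources.
def bucket_url(region: str, bucket: str, key: str = "") -> str:
    """Build a path-style URL: https://s3-<region>.amazonaws.com/<bucket>/<key>."""
    url = f"https://s3-{region}.amazonaws.com/{bucket}"
    return f"{url}/{key}" if key else url

print(bucket_url("eu-west-1", "acloudguru"))
# https://s3-eu-west-1.amazonaws.com/acloudguru
```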

And that can actually be an exam question. They might give you a whole bunch of different HTTP links, and you have to choose which one would be the correct one for a bucket. So do understand the format of bucket names. Finally, if you upload a file to S3, you will always receive an HTTP 200 code if the upload was successful. And that’s quite important to remember going into the exams as well. So another really important thing to understand before going into any of the exams is the actual data consistency model for S Three.
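That HTTP 200 check can be sketched in code. The helper name and the fake response below are illustrative; the response dict is shaped the way boto3 surfaces an S3 PUT result, with the status under ResponseMetadata.

```python
# Hypothetical helper: given the response dict from an S3 PUT
# (e.g. boto3's client.put_object), confirm the upload succeeded.
def upload_succeeded(response: dict) -> bool:
    # boto3 exposes the HTTP status under ResponseMetadata.
    return response.get("ResponseMetadata", {}).get("HTTPStatusCode") == 200

# Simulated successful response, shaped like what boto3 returns:
fake_response = {"ResponseMetadata": {"HTTPStatusCode": 200}, "ETag": '"abc123"'}
print(upload_succeeded(fake_response))  # True
```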

And if you go and read through the AWS documentation, it’s always very technically accurate, but it can be really, really dry. And that’s why learning online is so much easier, in my opinion, because you can actually have people talk you through it. So the very first thing you need to understand about the data consistency model is this: you get read-after-write consistency for PUTs of new objects, and then you get eventual consistency for overwrite PUTs and DELETEs. What the hell does that actually mean? Well, it’s actually really, really simple.

So when you put a new object into S3, you’re going to get immediate consistency. So you’re going to be able to read that object right away. So if you make a call (let’s say it’s a text file and you want to read the data in there), you’re going to be able to read it immediately. However, you don’t get that kind of speed when you’re updating or deleting an object. It can take some time to propagate. And if you think about how S3 is actually designed, it’s designed across multiple devices in multiple facilities. So, for example, if you update a Word file, maybe just changing the date in it, and then do a read immediately after the update, you could get either the new data or the old data. It takes a little bit of time to propagate so that it’s completely consistent across all the facilities.

And so updates to S3 are atomic. So what do we mean by “atomic”? Well, basically, you’re either going to get the new data or you’re going to get the old data. What you’re not going to get is partial or corrupted data. You’re going to get either the new version or the old version. So it’s really important to understand those two points going into the exams. When you write a new object to S3, you can immediately read it; however, if you update or delete an object, it may take some time for that to be consistent across facilities. So it’s a really simple concept, but make sure you know it going into your exam. So S3 is a simple key-value store, but what does that mean? Well, basically, S3 is object-based, and objects consist of the following. You’ve got your key, which is basically the name of the object, and then you’ve got your value, which is simply the data, which is made up of a sequence of bytes. Then you’ve got your version ID, which is really important for versioning, and we’re going to have a whole lab on versioning coming up in this section of the course. Then you have your metadata. If you’re new to it and you don’t know what metadata is, it’s simply data about data.
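Those parts of an object can be sketched as a simple structure. The class and field names here are purely illustrative and are not an AWS API shape; this is just a mnemonic for the four components listed above.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the parts that make up an S3 object.
@dataclass
class S3Object:
    key: str                  # the object's name (its full path in the bucket)
    value: bytes              # the data itself, a sequence of bytes
    version_id: str = "null"  # "null" when versioning is disabled
    metadata: dict = field(default_factory=dict)  # data about data

obj = S3Object(key="reports/2022/summary.pdf",
               value=b"%PDF-1.7 ...",
               metadata={"uploaded": "2022-12-20",
                         "content-type": "application/pdf"})
print(obj.key, len(obj.value))
```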

So metadata might be the date that you uploaded this file or the last time you updated it, for example. Okay, so when we put an object into S3, what are we actually doing? What is it we’re creating? Well, basically, objects consist of the following. We have the key, and this is literally the file name of the object. So what you have to understand is that S3 is designed to be lexicographical, that is, to sort objects in alphabetical order, and this can actually have really important design considerations. It can certainly be a very popular scenario in many of the different exams, certainly at the professional level, but you’d also have to know this for the Developer Associate exam. So let’s say you’ve got log data, for example. Log data typically has a date and time in the file name, and all the file names can look really, really similar, especially if you’re writing a new file every second, for example. And then what will happen is that all the data will actually be physically stored in the same sort of area in S3, and you can get performance bottlenecks.

So what kind of solution should you use? Well, you can actually just add a random sort of salt at the start of the file name. So instead of it starting with the date, you might add a random letter or number, and that ensures that the objects will be stored evenly across S3. So that is a scenario that can arise, and it’s also a really important design consideration. So if you do have file names that are very similar, you might want to consider adding some randomness to the beginning of the file names. Then we have the value, which is literally the data and is made up of a sequence of bytes. So the value is the data of the object that you’re storing. We then have the version ID. So which version of the object is this? And we’re going to have a lab on versioning coming up later on in this section of the course. And then we have metadata. And if you’re new to it, metadata sounds complicated, but metadata is literally data about data. So when was this file first created? That’s metadata. And then we have sub-resources. So sub-resources basically exist underneath an object; they don’t exist on their own.
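A minimal sketch of that salting idea, using a short hash of the key as the prefix so similar date-based names spread across the keyspace. The prefix length is an arbitrary choice, and the log file name is illustrative.

```python
import hashlib

# Prefix a date-based key with a few hash characters so that
# lexicographically similar names land in different parts of S3's keyspace.
def salted_key(key: str, salt_chars: int = 4) -> str:
    prefix = hashlib.md5(key.encode()).hexdigest()[:salt_chars]
    return f"{prefix}-{key}"

print(salted_key("2022-12-20-13-00-01.log"))
```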

And sub-resources consist of two things. They consist of access control lists, and access control lists are exactly what they sound like. So who can access this object? Furthermore, the access control list allows for fine-grained permissions. So you can apply an access control list to an individual object, an individual file in a bucket, or you can apply them at the bucket level, and we’ll explore access control lists later on in this section of the course. And then you also have torrents. So S3 does support the BitTorrent protocol. So what else should you know about S3? Well, it’s built for 99.99% availability, and then Amazon actually gives you an SLA of 99.9% availability. For those of you that don’t know, an SLA is just a service level agreement. All infrastructure-as-a-service providers, including cloud providers, always give SLAs on certain services. And then Amazon guarantees 99.999999999% durability for information on S3, and of course you don’t want to read that out every single time, so it’s just called the eleven-nines durability guarantee. So what do we mean by durability? Well, it just means that you won’t lose a file; that’s how durable the data is. And then S3 also comes with tiered storage options.

So there are various types of storage options in S3, which we’ll go over in the next slide. We also have lifecycle management, and we have a lab on this. But basically, while your data is less than 30 days old, it sits in one particular storage tier; after 30 days, you move it to another storage tier; and then, after 90 days, you archive it off into Glacier. S3 also supports versioning, so you can have one object with multiple different versions, and again, we’re going to have a lab on that in this section of the course. You can do encryption on S3, and there are several different ways of doing encryption, so it’s really important to remember the different methodologies of encryption going into the exam. So we’ll have a lab on that, and then you can secure your data in a couple of different ways. You can use access control lists and you can use bucket policies, and we’re going to have a lab on the differences between those two, and it’s really important to understand that going into the exam as well. So what are the different storage tiers or classes in S3? Well, first there’s the normal S3 storage class, called S3 Standard, and that’s the one with 99.99% availability in terms of its design and its eleven-nines durability. It’s stored across multiple devices in multiple facilities, and it’s designed to sustain the loss of two facilities concurrently.
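That 30-day/90-day lifecycle example can be written out as a configuration. The dict below follows the shape boto3's put_bucket_lifecycle_configuration expects, but here it is only built and inspected locally; the bucket name and rule ID are illustrative.

```python
# Lifecycle configuration matching the example above: move objects to
# Infrequent Access after 30 days, then to Glacier after 90 days.
lifecycle = {
    "Rules": [{
        "ID": "archive-old-data",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},   # apply to every object in the bucket
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
    }]
}
# With boto3 (not run here):
# s3.put_bucket_lifecycle_configuration(Bucket="my-bucket",
#                                       LifecycleConfiguration=lifecycle)
print([t["StorageClass"] for t in lifecycle["Rules"][0]["Transitions"]])
```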

So it is really, really readily available. And then we have S3 Infrequently Accessed (S3-IA), for data that is accessed less frequently but still requires rapid access when needed. So it’s a lower fee than S3 Standard, but you’re charged a retrieval fee. So think of an example. Maybe you’ve got payroll data; maybe you’ve got people’s wage slips all inside an S3 bucket. You don’t actually need to review that until year’s end. But when you do need to review it, you need to be able to do so immediately. You don’t want to wait three or four hours for it to come out of some kind of data archive system. So S3 Infrequent Access is your best bet here, because you can basically store it and access it immediately, but it costs a lot less than S3 Standard. And then you have Reduced Redundancy Storage, or RRS. And while this is intended to provide the same availability, the actual durability suffers. So it’s only 99.99%, but it is a lot cheaper than using standard S3. So what would your use case be for something like this? Well, maybe it’s data that you can generate again. So you could store all your image files in one bucket, but the thumbnails in another bucket. And with the thumbnails, you could use RRS so that if a file goes missing, you can just regenerate the thumbnail, which will save you some money on storage. And then finally, we have Glacier.

So Glacier is very, very cheap, but it’s used for data archival only, and it takes three to five hours to restore data from it. And that’s something that’s going to come up constantly in your exam. So you might have all these different storage options and you want to save the most money, but it will come down to retrieval time. Do you need to be able to retrieve an object within a few milliseconds, or can you wait a few hours or even days? Then there are the financial considerations. Are you going to be frequently retrieving this object, or are you only going to retrieve it every so often? So you’re going to get all these different scenario questions asking you to architect the best mechanism for storing your data in S3. And we’re going to have lots of practise quiz questions on that as well. So here’s a nice little table that basically sums up the different options. In our S3 storage classes, we have Standard, Infrequently Accessed, and then Reduced Redundancy Storage. The durability for Standard and Standard Infrequent Access is the same, but it drops for Reduced Redundancy Storage. The availability slightly drops for Standard Infrequently Accessed but is the same for both Standard and RRS. And then there is concurrent facility fault tolerance.
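Those trade-offs can be sketched as a hypothetical decision helper. The function, its parameters, and the thresholds are purely illustrative, not anything AWS provides; it just encodes the retrieval-time and cost reasoning above.

```python
# Hypothetical chooser reflecting the trade-offs discussed above.
def pick_storage_class(frequent_access: bool, reproducible: bool,
                       can_wait_hours: bool) -> str:
    if can_wait_hours:
        return "GLACIER"             # cheapest; 3-5 hour retrieval
    if reproducible:
        return "REDUCED_REDUNDANCY"  # lower durability; easily regenerated data
    if frequent_access:
        return "STANDARD"
    return "STANDARD_IA"             # rapid access, lower fee, retrieval charge

# e.g. wage slips: rarely read, but needed immediately when they are
print(pick_storage_class(frequent_access=False, reproducible=False,
                         can_wait_hours=False))  # STANDARD_IA
```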

So how many facilities can we lose and still have S3 online? Well, for Standard, we can lose up to two. Same with Standard Infrequently Accessed, but with Reduced Redundancy Storage, we can only lose one facility. Everything supports SSL. Then there’s first-byte latency. So how quickly can you access your data? Well, it’s going to be within milliseconds, and all our storage classes support lifecycle management policies. Now I’m going to come back to Glacier. Glacier is its own independent service. It integrates very tightly with S3, but it has its own icon and landing page in the console. So Glacier is for data archival. It’s extremely cheap, storing data for as little as one cent per gigabyte per month, and it’s optimised for data that is infrequently accessed, for which retrieval times of three to five hours are suitable.

So, comparing Glacier with Standard and Standard Infrequent Access: I’m not sure why RRS isn’t on this list, but it isn’t. So we have the same durability; we have slightly different availability for IA, which we explored in the last couple of slides. As for the availability SLA, they don’t actually give you an SLA for Glacier. The most important thing to remember here is the first-byte latency: with Glacier, it will take you a few hours. And by first-byte latency, we simply mean how long it’s going to take you to retrieve that object. So we now know a lot about S3, apart from the charges. So what do you get charged for on S3? Well, you’re charged for the following. You’re charged for storage: how much data are you storing on S3? You’re charged for requests: this is the number of requests that are being made for objects within your S3 buckets. Then there’s storage management pricing. This is relatively new, but when you store data in S3, you can add a whole bunch of tags to it, and this allows you to control costs. So you can know that this data is associated with human resources, for example, or this data is associated with your developers, and it allows you to track your costs against S3. And they charge per tag. And then we have data transfer pricing.

So data coming into S3 is free, but moving data around within S3, such as replicating your data from one region to another, is charged; we will cover cross-region replication in a separate lab, but you will be charged for that replication. And then we have transfer acceleration. What is transfer acceleration, I hear you ask? Because we haven’t actually covered that yet. Well, Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your end users and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. The data arrives at an edge location and is then routed to S3 over an optimised network path. So let’s see what that actually looks like. So we’ve got our users; they’re all over the world. They might be in South Africa, in South America, over in Japan, or in New Zealand. But my actual bucket location is inside London, because that’s where I configured it when I first set up my bucket. So instead of the users uploading directly to the bucket, what they’re actually doing is uploading to our edge locations. Our edge locations are much, much closer to the users, and then we have much superior networking between our edge locations and our AWS region in London. So you can actually accelerate the upload speed of your files. And if you type “Amazon S3 transfer acceleration” into Google, you’ll be able to find a tool that actually shows you how much faster it is when you turn transfer acceleration on.
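The accelerate feature has its own endpoint hostname; a small sketch of that format follows (the bucket name is illustrative), with the boto3 opt-in shown only as a comment rather than run against AWS.

```python
# The accelerate endpoint routes uploads through edge locations.
def accelerate_url(bucket: str, key: str) -> str:
    return f"https://{bucket}.s3-accelerate.amazonaws.com/{key}"

print(accelerate_url("acloudguru2017ryan", "logo.png"))
# With boto3 you would opt in via client config (not run here):
# from botocore.config import Config
# s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
```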

So if I test places that are further away from me, places like Tokyo, for example, or Sydney, or Singapore, I can see that it’s going to be 37% faster if I have transfer acceleration turned on. Okay, so congratulations! We’re coming towards the end of this lecture. You’ve learned an awful lot. What are my exam tips for S3? Remember that S3 is object-based. You can’t install operating systems or databases on it; the performance would just be absolutely terrible. You only want to store files: things like Word files, videos, pictures, et cetera. Files can be anywhere from zero bytes up to five terabytes in size. There’s unlimited storage, and files are stored in buckets. A bucket is just a fancy name for a folder, and S3 has a universal namespace, which means that bucket names must be unique globally. And in the next lab, we’ll show you how to create a bucket, and we’ll try and take a test bucket as an example. So in terms of what the actual link to your S3 buckets looks like, it’s always s3, followed by a hyphen, then the region, then amazonaws.com, and finally a forward slash and the bucket name.

Do definitely remember that format going into the exam. Remember the consistency model: you have read-after-write consistency for PUTs of new objects. So when you create an object in S3, you are able to read it immediately. But you only have eventual consistency for overwrite PUTs and DELETEs. So if you update or delete an object in S3, it may not appear as deleted, or the update may not appear immediately; it takes some time for this replication to happen. Then we have the different storage classes within S3. We’ve got the normal S3 Standard, which is durable, immediately available, and good for frequently accessed data. S3-IA is durable and immediately available, but it’s best for infrequently accessed data. We then have Reduced Redundancy Storage, which is used for data that is easily reproducible, such as thumbnails, for example. And then we have Glacier. This is for archival data, where you are able to wait between three and five hours before accessing it. You should remember the core fundamentals of what actually makes up an S3 object. You always have the key. This is the name. Remember, names are lexicographical, so they’re stored in alphabetical order across the S3 facilities. You then have the value, which is the data itself. Then you have the version ID of the object. So which version is it? We’re going to have a lab on versioning coming up.

We then have metadata. Metadata is just data about data. And then we have the sub-resources that exist underneath the object, which will be the access control list. So who can actually access this data? And then we have support for the BitTorrent protocol. So some further tips, and I keep saying this over and over again, but it’s object-based storage only. It’s only for files. Do not install an operating system on it. Do not install databases on it. You would use block-based storage for that, and we’ll cover that in the EC2 section of the course. When you successfully upload an object to S3, it always returns an HTTP 200 status code indicating that it was successfully uploaded. My final tip before you sit the exam is to make sure you read the S3 FAQ. S3 is a fundamental service provided by AWS, and it will appear frequently in your exams. Okay, that’s it for this lecture, guys. If you have any questions, please let me know. If not, feel free to move on to the next lecture, where we start getting our hands dirty and it becomes a lot more fun, because we start interacting with S3 in the console.

2. Creating An S3 Bucket Using The Console

Okay, so here I am in the AWS console. If you notice up here, I’m currently in Northern Virginia, and there are a whole bunch of different regions that I can choose from. And if I go up to Services, I will be able to see S3 under Storage. Now, S3 is one of the oldest services that AWS has, and for that very reason, it is featured very heavily in both the Developer Associate exam and the Solutions Architect Associate exam. So we click into S3 and we get this great little splash screen. Now, notice up here that it’s changed my region to Global. So you manage your S3 buckets at a global level. So you could have an S3 bucket in London, one in Northern Virginia, and one in Sydney, but you’ll be able to see them all in one place. You don’t need to change your regions, which makes it a lot easier to use. So what we want to do now is go ahead and create a bucket. Now, your bucket name has to be unique. So if I type something in like “testbucket”, that should probably be in use by somebody else; let’s go ahead and hit next. There you go: bucket name already exists. And the reason for this is that it’s using a DNS namespace. So you’ll be able to access your buckets via a DNS address, and a DNS address is simply a web address. So you have a web address for your buckets. So “testbucket” isn’t going to work. People keep stealing the “acloudguru” bucket name, too. So I’m going to try “acloudguru2017” and see if that is available.

No, stop stealing my buckets, guys. So I’ll do “acloudguru2017ryan”. Now in here, we can change our region. So where do you want your bucket to be placed? I’m based out of London right now, so I’m going to click to put it in London. And you can copy your settings from an existing bucket. Now, as you can see up here, there are all these different phases that you can go through. You could just hit Create and try to create the bucket directly, but I want to go ahead and set properties, permissions, and then review. So go ahead and hit next. So here we have versioning. We’re going to cover that in the next lecture of this course. And here we’ve got logging: set up access logs, records that provide details about access requests, so you can log who does what to your bucket. And here we’ve got tags. So you can use tags to track your costs against different projects or criteria. So, in tags, you could click in here and add a tag. And we could say something like “owner”, and this is the finance team. We’ll go ahead and hit save. And you can have quite a few tags in there as well. So go ahead and hit next. And here we can manage our users. Now here, I’m actually logged in as the root account. I’ve been a little bit naughty.
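The console steps so far map onto API calls roughly like this. A tiny stub stands in for a live boto3 client, so nothing here touches AWS; the bucket name and the tag are just the examples from this walkthrough, and eu-west-2 is the London region.

```python
# Stub recording the calls a real boto3 S3 client would receive.
class StubS3:
    def __init__(self):
        self.calls = []
    def create_bucket(self, **kw):
        self.calls.append(("create_bucket", kw))
    def put_bucket_tagging(self, **kw):
        self.calls.append(("put_bucket_tagging", kw))

s3 = StubS3()  # with boto3 this would be: s3 = boto3.client("s3")
# Create a bucket in London (eu-west-2)...
s3.create_bucket(Bucket="acloudguru2017ryan",
                 CreateBucketConfiguration={"LocationConstraint": "eu-west-2"})
# ...and tag it with the owner, as in the console walkthrough.
s3.put_bucket_tagging(Bucket="acloudguru2017ryan",
                      Tagging={"TagSet": [{"Key": "owner",
                                           "Value": "finance team"}]})
print([name for name, _ in s3.calls])
```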

So: acloudguru-alexa. This is our Alexa account that we use for the free Alexa course, and I don’t have any other user IDs on this account. You can also add access for other AWS accounts. So this first section is for users in the current account you’re using; if you’ve got multiple users in there, you’d add them here. And here is where you can grant access to other AWS accounts. And lots of companies will have multiple AWS accounts. You might have one for your test and development teams, and then you might have a separate one for production. So you can add additional accounts here. In here, you can manage public permissions: “Do not grant public read access to this bucket (Recommended)”. Now, by default, all buckets are private. You could also grant public read access to this bucket by clicking here. But I’m going to leave it as the default, and we’ll go through and look at how we can make our buckets, or objects within our buckets, readable later on.

In here, we can manage system permissions: “Do not grant the Amazon S3 log delivery group write access to this bucket.” This is basically for when you’re using the bucket for logging; by default, it does not grant the S3 log delivery group access to this bucket. Go ahead and hit next. And in here, you can just review all of the changes that you’ve made. So we can see my bucket name is acloudguru2017ryan. The region I’m deploying it to is London. Versioning is disabled. Logging is disabled. Tagging is enabled; I’ve put one tag in there. Permissions: only one user has access to this bucket, and that’s my root account. Then public permissions and system permissions are disabled. So go ahead and create the bucket, and then we’ll proceed. We’ve got our bucket, and as you add more buckets here, you’ll see them fill up. So we’ve got acloudguru2017ryan. Click in here, and let’s upload some files to our bucket. So go ahead and hit upload. I’m going to add two files. Let’s go over to my downloads directory. So we’ve got the A Cloud Guru logo, and we’ve also got some Alexa skills. If you’re interested in the A Cloud Guru platform, we offer a free Alexa course. So go ahead and hit open. Now it’s ready to upload, so go ahead and press the upload button. And that’s it. You can see it uploading. Down here, you’ve got your operations. We’ve got one in progress, zero successes at this moment, and zero that have had an error. And there you have it: one success. The reason it says one success instead of two is because it was all done in one upload, so it’s one successful upload. If you did this via the command line or the API, you’d get an HTTP 200 code back indicating that the upload was successful. It’s really important to remember that going into your exams.

So straight away I’ve got my two objects, and these are just files sitting inside my bucket. By clicking an object, we bring up all the different overviews, properties, and permissions of the object. And you can see that right away I’ve got a link to the object. And if I were to click on this link, I would get this XML file, and it basically says error code: access denied. I don’t know if you’ve ever seen The Lawnmower Man, but there’s a scene in it where he keeps saying “access denied”, “access denied”, over and over again. So it kind of reminds me of that. And you get this because, by default, your buckets are private, and all the objects inside your buckets are going to be private as well. So if we go back to our bucket and click on our object, how can we make this object public? Well, you can actually just go up here to More, and you’ll see there’s an option that says “make public”. You can click on that, and that will make the object public. You get this little warning message here that says everyone will have access to one or all of the following: read and write permissions for this object.
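Behind the scenes, that “make public” button amounts to putting a canned ACL on the object. Sketched here against a stub rather than a live boto3 client; the bucket and key names are the ones from this walkthrough.

```python
# Stub standing in for a real boto3 S3 client.
class StubS3:
    def __init__(self):
        self.acl = None
    def put_object_acl(self, Bucket, Key, ACL):
        # With boto3, this grants the "public-read" canned ACL on the object.
        self.acl = (Bucket, Key, ACL)

s3 = StubS3()  # with boto3: s3 = boto3.client("s3")
s3.put_object_acl(Bucket="acloudguru2017ryan", Key="logo.png",
                  ACL="public-read")
print(s3.acl)
```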

So go ahead and click “make public”. If you click the link again, you should now see the A Cloud Guru logo. I’m just going to go back in my browser, and we can click on the bucket again, and then we can go in and have a look at the permissions on this particular object. We can see this if we scroll down and click on Permissions. Under Permissions, we’ve got three different types of permissions. We have owner access: what is the owner of this account allowed to access within this bucket? So you can see my account here, and I have the ability to read the object, read the object’s permissions, and write to the object. Then there’s access for other AWS accounts: I could click in here and add another AWS account and give them permission to access this object. And then there’s public access: we have a group that says everyone is allowed to read the object, but not to read the object’s permissions or write the object’s permissions. Up here, we can click on Properties. So right here we’ve got the object’s storage class, which is Standard. If I click in here, I can change this to Standard Infrequently Accessed or Reduced Redundancy Storage.

Reduced Redundancy Storage is where you store non-critical, reproducible data at lower levels of redundancy, but you also save on costs. Infrequently Accessed means that you will not be accessing this object very frequently, so you get a lower storage cost, but the same level of redundancy as Standard. So I’m going to leave it as is and just hit cancel. In here, we can also look at encryption, so we can go ahead and encrypt this object. This is server-side encryption. So basically, if somebody was inside AWS’s data centres trying to look at this object outside of the AWS account, say by looking at the actual physical disk, the object would be encrypted, so they wouldn’t be able to read it. So you can go ahead and turn that on for this object if you want. I’m going to save it there. So it’s using AES-256 encryption. In here, we’ve got metadata, so S3 created some metadata around it. We’ve got the content type, and it’s image/png. You can add in your own metadata here. To be honest, this is largely beyond the scope of the Solutions Architect Associate and Developer Associate courses, but you can go through here and add different metadata, perhaps Content-Language if you want to say English, something like that. Just go ahead and hit cancel. And then, in here, we’ve got our tags for the object. So we can go ahead and tag this object.
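Turning on that server-side encryption at upload time looks roughly like this. Again a stub stands in for the real boto3 client, and the object details are illustrative; AES256 here corresponds to S3-managed encryption keys.

```python
# Stub that records the parameters a real put_object call would receive.
class StubS3:
    def put_object(self, **kw):
        self.last = kw
        return {"ResponseMetadata": {"HTTPStatusCode": 200}}

s3 = StubS3()  # with boto3: s3 = boto3.client("s3")
resp = s3.put_object(Bucket="acloudguru2017ryan", Key="logo.png",
                     Body=b"...", ServerSideEncryption="AES256")
print(resp["ResponseMetadata"]["HTTPStatusCode"])  # 200
```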

Now, you remember we actually tagged the bucket, but the objects within the bucket didn’t inherit the bucket’s tag. So we could go in here and add our tag, and we could say “department: finance team”, or something like that. So you can tag all the way down to individual objects, but do remember that individual objects do not inherit the bucket’s tags. If we click over here and go back to Overview, we’ll be able to see who owns this object, when it was last modified, what storage class it’s in, the server-side encryption we just changed, and the size of the object; remember that the minimum size is zero bytes, which can be a popular exam topic. And then there’s the link down below. So I’d like to emphasise that we’ve been working with this specific object, our A Cloud Guru logo PNG. We’ve been playing with the encryption options, the tags, et cetera, just on this object. We haven’t been doing it at the bucket level, though. So let’s go ahead and click in here, and let’s go ahead and change some of the properties of our bucket. And as you can see, it’s only up here that you can do it. So we can go over to Properties, and here we’ll see a whole bunch of different things. So we’ve got versioning, which we’re going to cover in the next lab. We’ve got logging; we’ve got static website hosting. We’ve got our tag, so we can already see our tag. In here, we’ve got Transfer Acceleration, which basically makes it much faster to transfer very large files up into S3, and I’ll have a lecture on that coming up as well. We have events, and we have Requester Pays in here. So by default, basically everything is disabled or turned off, and these are things that you can go ahead and turn on. In here, we’ve got our permissions.

So right now, our owner access is the only access to this bucket. We could change the permissions for everyone in here. So we could say read bucket permissions or write bucket permissions, and we could enable the ability to list or write objects within this bucket. I’m not going to do that. I want to keep it locked down. And essentially, there are two ways to control access to your S3 buckets. We’ve got access control lists, or bucket ACLs, and then we also have bucket policies, and you can create bucket policies here. You can go ahead and click Policy Generator, and this will generate a policy for your bucket. So first you choose your policy type; we have different policy types, and in this section, I’m going to select S3 Bucket Policy. Then we add our statements, so we can say Allow or Deny for our principal. The principal could basically be an Amazon Resource Name. So, for example, the principal in this statement could be Joe, an IAM user; the way you would do that is you would go and get the Amazon Resource Name for Joe. Or you could just say star, which is everyone. And then we pick the AWS service.

So what is it that we’re allowing? We’re saying that everyone has access to the AWS service S3, and then what actions can they take? So we’ll say they can create buckets, list buckets, and delete buckets, or you can just give them full access. So simply click “all actions” in here, and they will be able to do whatever they want. And then, in here, you put your ARN. So this is the ARN for the bucket that you’re using. So I’m not going to do this; I’m going to come back out. This is a little bit beyond the scope of both the Solutions Architect Associate and the Developer Associate courses. However, if you are interested in creating your own S3 bucket policies, we do have an S3 masterclass on the A Cloud Guru platform that explores the concept in a lot of detail. I’m just going to go ahead and go back. So here I am, back in my S3 window. In here, we’ve got the CORS configuration; we’re going to cover that in another lecture. In here, we’ve got Management. So we’ve got lifecycle management. We’ve got replication, so we can actually replicate this bucket and all its objects to another region. In here, we’ve got analytics, we’ve got metrics, and we’ve got inventory.
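For reference, a public-read policy like the one the policy generator builds looks like this. The bucket name is the one from this walkthrough, and the put_bucket_policy call is shown only as a comment rather than run against AWS.

```python
import json

# A bucket policy allowing everyone ("Principal": "*") to GET objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::acloudguru2017ryan/*",
    }],
}
# With boto3 (not run here):
# s3.put_bucket_policy(Bucket="acloudguru2017ryan",
#                      Policy=json.dumps(policy))
print(json.dumps(policy, indent=2))
```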
