Pass the Amazon AWS Certified Solutions Architect - Professional Certification Exam Easily on Your First Attempt
Latest Amazon AWS Certified Solutions Architect - Professional Certification Exam Dumps, Practice Test Questions
Accurate & Verified Answers As Experienced in the Actual Test!
Download Free Amazon AWS Certified Solutions Architect - Professional Practice Test, AWS Certified Solutions Architect - Professional Exam Dumps Questions
File Name | Size | Downloads
---|---|---
amazon | 3.4 MB | 1771
amazon | 3.5 MB | 1189
amazon | 3.9 MB | 1085
amazon | 2.2 MB | 1137
amazon | 2.2 MB | 1247
amazon | 2.8 MB | 1382
amazon | 2.9 MB | 1743
amazon | 2.4 MB | 1914
amazon | 2.3 MB | 1762
amazon | 1.5 MB | 1804
amazon | 2 MB | 1760
amazon | 2.1 MB | 1976
amazon | 1.6 MB | 1866
Free VCE files for Amazon AWS Certified Solutions Architect - Professional certification practice test questions and answers are uploaded by real users who have taken the exam recently. Sign up today to download the latest Amazon AWS Certified Solutions Architect - Professional certification exam dumps.
Amazon AWS Certified Solutions Architect - Professional Certification Practice Test Questions, Amazon AWS Certified Solutions Architect - Professional Exam Dumps
Want to prepare by using Amazon AWS Certified Solutions Architect - Professional certification exam dumps? 100% actual Amazon AWS Certified Solutions Architect - Professional practice test questions and answers, study guide and training course from Exam-Labs provide a complete solution to pass. Amazon AWS Certified Solutions Architect - Professional exam dumps questions and answers in VCE format make it convenient to experience the actual test before you take the real exam. Pass with Amazon AWS Certified Solutions Architect - Professional certification practice test questions and answers from Exam-Labs VCE files.
Domain 1 - Design for Organizational Complexity
5. Creating first AWS Organization & SCP
Hey everyone and welcome back. In today's video, we will look at how we can implement an AWS Organization and its associated features. To implement it, as we have discussed, we will need two accounts. The account running in my Firefox browser will be the master account, and I have one more account running in Google Chrome, which I will consider the child account. I'll go to the Support Center in the child account so that we know its account number. From the master account, the first thing you need to do is click on AWS Organizations, which takes you to the Organizations console. Within here, you can click on Create Organization, and there are two options you will get: one is enabling consolidated billing only, and the second is enabling all features. If you just want consolidated billing, select the first option. If you want all the features, which include consolidated billing as well as policy-based controls, then you should enable all features, and that is the option we will be enabling right now. So once I select this, I'll go ahead and click on Create Organization. Perfect. Once the organization is created, you will see that a default account gets added; this is my current account, which we will consider the master account. Now, for the account where we want to enable consolidated billing or the policy-based features, we need to add it with the Add account button. Within this, there are two options available: one is Invite account and the second is Create account. Since we already have an AWS account created, I'll go ahead and click on "Invite account." Here, you have to put in the account ID of the AWS account that you want to invite as part of the organization, and I already have that account ID. One important thing to remember is that if your AWS Organization is newly created, it takes a certain amount of time to initialize. The documentation states that you should wait at least an hour, but I have seen a lot of people wait more than 24 hours and end up contacting support over issues related to the initialization of AWS Organizations. So let's just try it out. If I try to invite immediately, you'll notice it gives an error saying that you cannot add accounts to your organization while it is initializing, please try again later. So we'll have to wait for a certain amount of time. I'll pause the video for now, and we'll resume after some time. So it has been around four to five hours, and in the meantime I recorded four to five more videos while waiting for this specific step. Now I have entered the account ID and clicked on "Invite." So do remember that initialization can take a long time; the documentation says around 1 hour, and in reality it can take longer, but it is only needed once. Anyway, once the invitation is sent, I'll open my Chrome browser and navigate to AWS Organizations in the child account, where you'll notice that I have one invitation. This invitation is based on "Enable all features."
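For readers who prefer to script this flow instead of clicking through the console, here is a minimal sketch using boto3 (the AWS SDK for Python), run with credentials for the master account. The 12-digit account ID is a placeholder, and the invite is wrapped in error handling because a freshly created organization can raise the same "still initializing" error described above.

```python
import boto3
from botocore.exceptions import ClientError

# Run with credentials for the master (management) account.
org = boto3.client("organizations")

# "ALL" enables consolidated billing plus policy-based controls (SCPs);
# "CONSOLIDATED_BILLING" would enable the billing features only.
org.create_organization(FeatureSet="ALL")

# Invite an existing account by its 12-digit account ID (placeholder here).
try:
    resp = org.invite_account_to_organization(
        Target={"Id": "111122223333", "Type": "ACCOUNT"}
    )
    print("Handshake ID:", resp["Handshake"]["Id"])
except ClientError as err:
    # A newly created organization can still be initializing; wait
    # (the docs say up to an hour) and retry the invitation.
    if err.response["Error"]["Code"] == "FinalizingOrganizationException":
        print("Organization still initializing; retry later.")
    else:
        raise
```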
If the invitation is genuine, you can go ahead and click on Accept, and I'll confirm that it is. Once you have confirmed, it will show you the organization ID. And from your master account, if you refresh the page, there should be one more account visible: this is the child account, now visible on the master account's Organizations page. Now, if you go to the child account, then to the billing dashboard and then to consolidated billing, it should say that your account is now a member of an organization. So consolidated billing has been enabled, and the second thing we were speaking about is the policies. We had discussed how we can control the permissions of a child account even when the root user is logged in to that child account, so let's go ahead and try that out. From the master account, I'll click on Policies, and by default there is a FullAWSAccess policy that is created. Let's go ahead and create a custom policy. The policy name will be "S3 Deny," and the overall effect that we want is Deny. The service we are targeting is Amazon S3; for the actions, we select all and click on Add statement. This will deny all S3 access; I'll deny all S3 operations to make it clear, then go ahead and click on Create policy. Now the policy has been created. If you go back to the accounts and click on one of them, it says that in order to attach a policy, you must first enable that policy type on the root. So what you'll have to do is go to Organize accounts, select the root, and enable service control policies. Perfect, now service control policies are enabled. Back in the accounts view, you'll notice that there are two policies: one is FullAWSAccess, the default policy, and the second is our S3 Deny policy. Before we attach this policy, let's check whether we are actually able to access the S3 service. I'll go to my child account, then to S3, and as usual I can see all of the S3 buckets; I'm logged in as the root account, so I have full permissions. Now, from the organization, I'll attach the S3 Deny policy. Once this deny policy is attached, access to S3 is completely denied. To try that out, let me open up S3 again, and now you see it is giving the error "access denied." This error is occurring even though I am logged in as root. So even the root user in a child account will not be able to perform the operation if there is a deny policy attached through AWS Organizations. This is a pretty interesting service, and I hope you understand the power that AWS Organizations has. You can do a lot of things, like having policies that prevent anyone from disabling CloudTrail, and various others that help you maintain overall security within your AWS environment. So this is it for the practical lecture on AWS Organizations. I hope you found this video useful, and I sincerely hope you will implement AWS Organizations in your environment. I hope this has been informative for you, and I look forward to seeing you in the next video.
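The deny-all-S3 SCP from this demo can also be created and attached programmatically. A sketch, assuming the credentials belong to the master account and using a placeholder child-account ID; the enable_policy_type call is a one-time step and will fail harmlessly if SCPs are already enabled on the root.

```python
import json
import boto3

org = boto3.client("organizations")

# Enable SCPs on the root first (mirrors the console prompt).
root_id = org.list_roots()["Roots"][0]["Id"]
org.enable_policy_type(RootId=root_id, PolicyType="SERVICE_CONTROL_POLICY")

# An SCP that denies every S3 action regardless of the caller's IAM
# permissions; it applies even to the child account's root user.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "s3:*", "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="S3Deny",
    Description="Deny all S3 operations",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to the child account (placeholder account ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",
)
```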
6. Overview of AWS License Manager
Hey everyone and welcome back. In today's video, we will be discussing AWS License Manager. In enterprises today, managing licenses can become quite a hassle. I can give you a real-world use case: at one of the organizations I have been working with, we had more than 80 AWS accounts. It really becomes difficult because, once you purchase a license, it is so easy to launch a new service that you will not even know whether you are over-committing on a specific license you have bought for your organization. And at audit time, if the organization auditing you realizes that you are overusing your licenses, that can lead to a huge fight. Now, typically speaking, licenses are associated with various levels. You can have operating-system-level licenses, such as Windows or Red Hat; database-level licenses, such as Oracle DB or Microsoft SQL Server; and beyond that, application-level licenses like SAP, or various third-party licenses. Typically, speaking of the challenges an enterprise might face: one thing we already discussed is that in the cloud you can launch new servers with just the click of a button, and this really becomes the primary cause of over-committing on various licenses. The second important point is that license violations detected during an audit can lead to heavy penalties. And if your organization has more than 50 AWS accounts, and each AWS account spans so many regions, it really becomes difficult to track license usage across multiple accounts. This is where the AWS License Manager service really helps. AWS License Manager is a service that allows us to manage licenses from a wide variety of software vendors across AWS and on premises. The great thing about License Manager is that we can enforce policies for licenses based on various factors, like CPU sockets, and that will in turn control the number of EC2 instances that can be launched. So let's understand this with a demo, which will help us remember. I'm in my AWS License Manager console, and currently I have one license configuration named "College Demo Application." The status is active, and the license type is vCPU. And if you see over here, the license consumption status is one out of one. If I go to the dashboard, it tells me that I have one license configured and that the one license is already consumed. And here it says that enforcement is enabled. That basically means that for any EC2 instance you try to launch, if it is part of the license configuration, you will not be able to launch it, because you have already utilized 100% of your license consumption. So let's try it out. Let me go to EC2, where I have one EC2 instance running. Now, let's say that this is an AMI of some expensive software that our enterprise uses and has purchased a license for, and let's try to launch an instance from it. I'll do a Review and launch directly, and let me click on "Launch." Now, while launching, you see it gives a launch failure error, and it says that the license count limit for the license configuration has been exceeded. And this is a great feature, because it allows you to enforce the licensing policies within your organization.
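If you want to check the same consumption numbers the dashboard shows from a script, License Manager exposes them through its API. A small sketch; the license configuration ARN in the second half is a placeholder.

```python
import boto3

lm = boto3.client("license-manager")

# List all license configurations with their consumption counters,
# mirroring what the License Manager dashboard displays.
for cfg in lm.list_license_configurations()["LicenseConfigurations"]:
    print(
        cfg["Name"],
        cfg["LicenseCountingType"],
        f'{cfg.get("ConsumedLicenses", 0)}/{cfg.get("LicenseCount", 0)}',
        "hard limit" if cfg.get("LicenseCountHardLimit") else "soft limit",
    )

# Per-resource usage for one configuration (placeholder ARN).
usage = lm.list_usage_for_license_configuration(
    LicenseConfigurationArn=(
        "arn:aws:license-manager:us-east-1:111122223333:"
        "license-configuration:lic-EXAMPLE"
    )
)
for item in usage["LicenseConfigurationUsageList"]:
    print(item["ResourceArn"], item["ConsumedLicenses"])
```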
Now, one great feature of License Manager is that you can integrate it with SNS, and you can also integrate it with AWS Organizations so that you get cross-account resource discovery, because it can easily happen that an enterprise has more than ten AWS accounts, maybe 50 or 100. And if so, it really becomes difficult to go through the License Manager dashboard of each and every account. So, with the help of cross-account resource discovery, tracking license usage across your organization becomes much easier.
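If I recall the API correctly, both integrations mentioned above are switched on through the service settings; treat the parameter names in this sketch as an assumption to verify against the current boto3 documentation, and the SNS topic ARN as a placeholder.

```python
import boto3

lm = boto3.client("license-manager")

# Turn on cross-account resource discovery via AWS Organizations and
# send License Manager alerts to an SNS topic (placeholder ARN).
lm.update_service_settings(
    SnsTopicArn="arn:aws:sns:us-east-1:111122223333:license-alerts",
    OrganizationConfiguration={"EnableIntegration": True},
    EnableCrossAccountsDiscovery=True,
)
```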
7. License Manager – Practical
Hey everyone, and welcome back. In the earlier video, we discussed the basics of License Manager and how it can be used to enforce policies; however, we had not done the practical. So let's go ahead and do the practical and look at how exactly this can be achieved. I'll go to AWS License Manager in a different region. The first thing you need to do is create a license configuration, so let's click on "Create license configuration." You can name your license here; I'll say "expensive-software." You can use a legitimate name such as Microsoft SQL Server, Red Hat, or whatever software the license is associated with. For the license type, you have three types available: one is vCPUs, the second is Cores, and the third is Sockets. Many licenses are associated with cores, and many are associated with sockets, so this really allows you to have granular control. For our testing purposes, I'll say vCPUs. Here you specify the number of vCPUs; I'll say one. That means we only have one license available, with a maximum of one vCPU that can be used. If you have a license that allows 100 vCPUs, you can just put 100 over here, but for simplicity I'll put one. And here you can enforce the limit. When you enforce the limit, you ensure that new EC2 instances that use this specific license cannot be launched beyond it. Once you have done that, you can go ahead and click on Submit. Great. Once you have submitted, let's click into the configuration, where you have to associate a specific resource; we need to associate an AMI here. So let's do one thing: I'll create an EC2 instance. Let me click on EC2, quickly create an instance, and review and launch it. So now our EC2 instance is running; let's go to Actions, and we'll quickly create an image. For the image name, I'll say "expensive-image." The AMI creation is in process, so let's wait. Great, the AMI has now been created. So what you need to do is associate this AMI with the license configuration. One example I can share: let's assume that you have purchased a license for Red Hat. Red Hat will have an AMI, so you can associate that specific Red Hat AMI ID with your license configuration. Let's see how we can do that. Under Associated AMIs, you can click on Associate AMI, and your AMI automatically comes up. I'll just select this AMI and click on "Associate." Remember, we had allocated a license consumption of one vCPU, so now let's try it out. If you launch an instance from our AMI with one vCPU, you will be able to launch it. However, if you try to launch an instance that has two vCPUs, let's try it out: here you see that the license limit has been reached, because this is trying to launch a server with two vCPUs, which we do not have the license for. So let's go back to the review screen, go to configure instance details, and this time try to launch an instance that has just one vCPU. I'll go ahead and do a Review and launch. Perfect: with one vCPU, the instance was successfully launched. And now, if we return to License Manager and look at the dashboard, we can see that exactly one license is consumed and enforced. Do remember that this can take a little time, close to three to four minutes, to get updated. So this is what I wanted to share with you here.
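The console steps above map closely onto two API calls. A sketch, with the region, AMI ID, and the AMI ARN format as assumptions to adapt to your environment:

```python
import boto3

lm = boto3.client("license-manager", region_name="us-east-1")

# One vCPU available in total, with a hard limit: launches that would
# exceed the count are blocked, exactly as in the console demo.
cfg = lm.create_license_configuration(
    Name="expensive-software",
    LicenseCountingType="vCPU",   # alternatives: "Core", "Socket", "Instance"
    LicenseCount=1,
    LicenseCountHardLimit=True,
)

# Associate the AMI of the licensed software with the configuration,
# so instances launched from it count against the license (placeholder AMI).
lm.update_license_specifications_for_resource(
    ResourceArn="arn:aws:ec2:us-east-1::image/ami-0123456789abcdef0",
    AddLicenseSpecifications=[
        {"LicenseConfigurationArn": cfg["LicenseConfigurationArn"]}
    ],
)
```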
So this is a high-level overview of how you can create your own license configuration. I hope this video was informative for you, and I look forward to seeing you in the next video.
8. Centralized Logging Architecture
Hey everyone and welcome back. In today's video, we'll be discussing centralized logging in AWS. Along with that, we'll also look at some of the important pointers that we need to consider before we go ahead and implement a centralized logging approach. Similar to what we discussed in the multi-account strategy video, a comprehensive log management and analysis strategy is mission-critical for any organization. Typically, in AWS you have a lot of log sources, like CloudTrail, AWS Config, and VPC Flow Logs, and you'll also have logs from EC2 instances, et cetera. So it is very important that you forward the logs to a bucket in a centralized account, and from this bucket you can go ahead and analyze the logs using log monitoring solutions like Splunk or various others. Now, there are certain considerations that you need to look into while implementing centralized logging. The first and very important consideration is the log retention requirements. This becomes especially important if your organization follows some kind of compliance framework; a lot of compliance standards, like PCI DSS and various others, have specific log retention requirements. Let's assume the log retention requirement is five years: if you are storing the logs in a centralized S3 bucket, you need to make sure the logs will be retained for five years and are not automatically deleted. That is very important. Along with that, lifecycle policies are also quite crucial, specifically if you are taking cost into consideration. Many times, organizations store application logs, typically for mission-critical servers, and application logs can run to terabytes. Since you don't need to keep terabytes of old log data in S3, you can move it to Glacier, saving you a lot of money. This is where lifecycle policies really play an important role. The second important point is that you need to incorporate tools and features to automate those lifecycle policies. Let's say that for up to six months, all the application logs are stored in S3 for easy retrieval in case any issues or anomalies are detected; after six months, all the older logs are automatically migrated to Glacier. Those transitions should be automated. The third important point is automating the installation and configuration of log shipping agents. This is also very important. Let's say you have EC2 instances in an Auto Scaling environment, and a new EC2 instance starts in the middle of the night. You need to make sure that the application and server logs of that new instance are also sent to the S3 bucket. To achieve that, the log shipping agent must be installed automatically; depending on the use case, you can add it as part of the user data or bake it into the AMI itself. This is an important consideration because, with Auto Scaling enabled, it is possible that an instance is launched in the middle of the night due to scaling, and once the CPU load decreases, that instance gets terminated. If you do not have an automated way of installing the log shipping agent, the logs from that short-lived instance would be lost.
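Before the last consideration, here is what the retention-plus-lifecycle pattern from the first two points can look like as an S3 lifecycle configuration: keep recent logs in S3, transition older ones to Glacier, and expire them only after the compliance window. A sketch against a hypothetical central log bucket; the bucket name, prefix, and day counts are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical central log bucket: 180 days hot in S3, then Glacier,
# deleted only after a five-year (compliance-driven) retention period.
s3.put_bucket_lifecycle_configuration(
    Bucket="kplabs-central-logs",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "log-retention",
                "Status": "Enabled",
                "Filter": {"Prefix": "AWSLogs/"},
                "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 1825},  # roughly five years
            }
        ]
    },
)
```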
And the last consideration is to make sure that whatever solution you implement supports hybrid environments. Again, this is a consideration; many organizations do not have a hybrid environment, but nowadays a lot of organizations are moving toward hybrid IT. Hybrid can mean cloud plus on-premises, or it can mean multi-cloud architectures as well, such as AWS together with Azure. So whatever solution you implement, make sure it supports hybrid environments too. Now, for the AWS certification, we need to understand how we can make use of AWS managed services to build the centralized logging solution. There are a lot of services that can help: you have Elasticsearch, you have CloudWatch Logs, you have Kinesis, and you even have S3. One thing to keep in mind is that the way you configure centralized logging differs depending on the AWS service: the way you configure it for CloudTrail is different from the way you configure it for VPC Flow Logs. That is one important part to remember. Before we conclude, let me show you a quick demo of what this kind of centralized logging architecture might look like. This is my account A; let's call it the centralized logging account. This account has various buckets: the first one is kplabs-central-cloudtrail-demo, and the second is kplabs-central-config-demo. Now, if you look at this diagram, those two buckets are present in the central account. Whatever other accounts you have, say account A, account B, account C, et cetera, you can configure various services in those accounts to send their logs to these specific S3 buckets that are part of the central account. All right? So let me quickly show you. Here I have the kplabs-central-cloudtrail-demo bucket. If I open this up, you can see it is receiving the CloudTrail logs from a different account; if I open it up further, these are the CloudTrail logs coming in from a different account altogether. So this specific S3 bucket is part of one account, and the logs present in it are coming from services in a different account. Let me quickly show you: I'm logged in to account B, and if I go to the trails over here, you can verify the account number, and looking at the S3 path, these are the logs from this other account. Now, regarding how you configure the logs: let's say I want centralized logging for all the CloudTrail logs. The way you configure each and every service for centralized logging is different. Here is a trail that has been created, and if you look, I have defined the bucket as kplabs-central-cloudtrail-demo; this bucket belongs to the centralized account, and there is a bucket policy created for this specific bucket so that the CloudTrail service in the other account is able to deliver logs to this S3 bucket. So there is a bucket policy created here, and this is what it looks like. As you can see in this bucket policy, we are allowing the cloudtrail.amazonaws.com service principal the GetBucketAcl as well as the PutObject permission on our S3 bucket. So this is for CloudTrail. Similarly, I also have a centralized bucket for the Config logs.
Let me quickly show you: this one is for the Config logs. Again, this is the account ID. Let's go to the Config service, and within the Config console, if I click on Settings, I have specified the bucket name as kplabs-central-config-demo. So this is the bucket, and this specific bucket also has a bucket policy configured. Here, instead of CloudTrail, we are specifying the Config service, so we have GetBucketAcl and PutObject defined for this bucket as well. So this is a high-level overview of centralized logging. I hope this video has been informative for you, and I look forward to seeing you in the next video.
9. Cross-Account Logging for CloudTrail and Config
Hey everyone and welcome back. In today's video, we'll look at how we can configure CloudTrail as well as AWS Config to forward their logs to an S3 bucket that belongs to a central account. The configuration is fairly simple and straightforward, so let's get started. The first step is to create the S3 buckets in the centralized account. I have an account here named "KPLabs," and we'll consider this the centralized account. The first thing we'll do is create a bucket; I'll call it kplabs-central-cloudtrail. Make sure you follow a naming convention so that it is easier to understand what each bucket is for. Let's create the bucket in the North Virginia region and click on Create. Similarly, we'll create one more bucket for the Config logs; I'll call it kplabs-central-config, select North Virginia as the region again, and click on Create. Great, both of our buckets are now created. Now I am logged in to a different AWS account named "Development," and I am in the CloudTrail console. If you go to the CloudTrail console, it should look something like this. You need to go to the Trails tab and click on "Create Trail." Let me give it a name; I'll go with kplabs-account-mumbai. You can now specify whether the trail should apply to all regions; let me select No for the time being. Then you need to specify the storage location. By default it will create a new S3 bucket; I'll set that to No, and then you have to specify the bucket name, which is kplabs-central-cloudtrail. Once you have done that, you can go ahead and create the trail. But before you do, the S3 bucket needs a proper policy in place that allows the CloudTrail service to store the logs in it. So let's go back to the S3 console. This is our S3 bucket; what you need to do is go to Permissions and, from there, click on Bucket policy. Here you have to put a bucket policy in place. I already have this policy available, and I'll be posting it below this video so that you can refer to it. Within this bucket policy, you need to change the ARNs. You can get the ARN of this bucket from the policy editor; let me copy this ARN and replace the first one, and let me replace the second one as well. Make sure you keep the /* suffix on the object ARN; this is extremely important, otherwise things will not work. This is something you need to verify, then click on Save. Great. Basically, what this bucket policy does, let me maximize the screen: there are two actions defined, one is GetBucketAcl and the second is PutObject. And if you look at the principal, it is a service principal, cloudtrail.amazonaws.com. This works because CloudTrail is an AWS service. However, if, say, you have an EC2 instance in another account, and that EC2 instance wants to store data in the centralized S3 bucket, then you would have to specify the account number in the principal. In this case, since we have specified the service principal, we do not have to explicitly define the account number. So this is the high-level overview. I'll copy the same policy, and let's go back to S3.
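For reference, the policy just described generally follows the two-statement shape AWS documents for CloudTrail delivery: GetBucketAcl on the bucket, and PutObject on the AWSLogs prefix, both restricted to the cloudtrail.amazonaws.com service principal. A sketch that applies it with boto3 from the central account; the bucket name and the source account ID are placeholders.

```python
import json
import boto3

BUCKET = "kplabs-central-cloudtrail"   # central-account bucket
SOURCE_ACCOUNT = "111122223333"        # account sending the logs

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # CloudTrail checks that it can read the bucket ACL first.
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {   # Delivery itself: objects land under AWSLogs/<account-id>/.
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{SOURCE_ACCOUNT}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

# Applied from the central account that owns the bucket.
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```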
And within the other bucket we made, kplabs-central-config, let's do something similar. I'll go to the bucket policy and paste the policy; this time, cloudtrail must be replaced with the Config service, and we'll do that in both places. I'll also maximize the screen so it's easier to read, and we'll copy this bucket's ARN; the ARN is shown at the top when you open Permissions and select Bucket policy. We'll replace this specific ARN in both places and make sure that the /* is still present on the object ARN. Once you have done that, you can go ahead and click on "Save." So now we have S3 buckets created for both CloudTrail and Config, and each bucket has the relevant bucket policy. Coming back to the other browser, where we are logged in to the Development account, I'll specify the S3 bucket, which is kplabs-central-cloudtrail, and proceed by clicking on Create. Great. Once created, you will see the trail information in the Trails tab of the CloudTrail console. The next thing we'll do quickly is the Config setup. This is what the Config console looks like; let's click on "Get started." Here you can include all resource types; since this is a demo for centralized logging, I'll just leave everything at the defaults. And here there are three options: one is to create a new bucket; the second is to choose a bucket from your account, in which case you would pick a bucket from this Development account where we are setting up Config; and the third is to choose a bucket from another account. I'll select the third option and put in kplabs-central-config. Now let's scroll down a bit. There are two options for the role: if you already have a role in your account, you can choose it; I'll just select "Use an existing AWS Config service-linked role" and click on Next. We'll skip the rules for the time being and click on Confirm. It takes a little time for the resources to be discovered, so let's go back to the S3 console and decrease the zoom a bit. Now, in the config bucket's overview, you'll see there is a new directory created called AWSLogs. If I go inside, you have the account number that the logs will be coming from, and within the Config prefix there is a config writability check file. Whenever you specify a bucket for AWS Config, it generates this writability check file to verify whether the necessary permissions are present. This also allows a system administrator to be confident that, since the AWS Config service was able to write this file, whatever new log files Config generates will also be written. So that is one quick way to verify it. Similarly, in the kplabs-central-cloudtrail bucket, the AWSLogs directory is also present, followed by the account ID, but there are no logs generated as of now; it takes a little while for the CloudTrail logs to be generated and delivered. You can perform some activities in the account, and within a few minutes you'll typically see the logs appear. However, the fact that this directory with the account number was created automatically (this is something CloudTrail does) tells us that the bucket policies we defined are correct and that the logs will come in perfectly.
So this is the high-level overview of how you can send the CloudTrail and Config logs cross-account to a centralized S3 bucket. With this, we'll conclude this video. I hope it has been informative for you, and I look forward to seeing you in the next video.
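If you prefer to script the member ("Development") account's side of this setup, a minimal sketch is below. It assumes the central buckets and their policies from the previous steps already exist; the trail name, bucket names, and the account ID in the role ARN are placeholders, and the Config section assumes the service-linked role is already in place.

```python
import boto3

# --- CloudTrail: deliver this account's trail to the central bucket ---
ct = boto3.client("cloudtrail")
ct.create_trail(
    Name="kplabs-account-mumbai",
    S3BucketName="kplabs-central-cloudtrail",  # bucket in the central account
)
ct.start_logging(Name="kplabs-account-mumbai")

# --- AWS Config: record resources and deliver to the central bucket ---
cfg = boto3.client("config")
cfg.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        # Service-linked role for AWS Config (assumed to exist already).
        "roleARN": "arn:aws:iam::111122223333:role/aws-service-role/"
                   "config.amazonaws.com/AWSServiceRoleForConfig",
    }
)
cfg.put_delivery_channel(
    DeliveryChannel={"name": "default", "s3BucketName": "kplabs-central-config"}
)
cfg.start_configuration_recorder(ConfigurationRecorderName="default")
```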
10. S3 Bucket Policies
Hey everyone and welcome back. In today's video, we will be discussing S3 bucket policies. Generally, one of the limitations of IAM is that it only works with certain principals, like IAM users, IAM groups, and IAM roles, within an AWS account. However, when we speak about S3, an S3 bucket is one entity that needs a lot of granularity, and that granularity cannot be achieved with the help of IAM alone. This is the reason we have S3 bucket policies, which we can attach directly to the S3 bucket. Let me give you a few examples so that you understand the need for S3 bucket policies. If you look at the exam study guide, this is the Security Specialty exam study guide: it is a PDF document, and this PDF is actually hosted on AWS S3. You can make a curl request to verify this, and in the response you will see that the server displayed is AmazonS3. Along with that, you also have the Amazon cost calculator; this application is again hosted on S3. If I do a quick curl on this application as well, you'll see the server returned is AmazonS3. So a lot of organizations use Amazon S3 for hosting websites and web applications, and even for hosting various audio files, video files, and others. Depending on the use case, certain restrictions would be required on the S3 bucket, so we will be looking at certain use cases in today's lecture. Let's begin with one of them. For the demo, I'll create a bucket; I'll call it kplabs-demo. This is the bucket we'll be creating in the North Virginia region, so let me go ahead and create it. You will see that the bucket is created, and access to it is not public; that means no one on the public internet will be able to directly access any files contained within this bucket. So let's do one thing: let's go inside the bucket and upload a sample TXT file. I select demo.txt and go ahead and upload it. Now if I click on it, it gives me the link to open or download that specific file. Let me copy the link, and in my console I'll try to curl this specific link; as expected, it shows access denied. The reason is that an S3 bucket has only private access by default. Now, suppose you want a scenario where whoever visits your file should be able to read it from any part of the world. One of the easiest things you can do is make that specific file public so everyone can read it. However, there are also scenarios where you only want certain IP addresses to be able to access those files; apart from those IP addresses, no one should be able to read that specific file. And all of those IP-address-related configurations can be set up with the help of bucket policies. So let's do one thing.
Let's go to Permissions; there are certain permissions associated here. Let's go back to S3, and within our bucket I'll click on Properties; from there I'll go to Permissions, and within Permissions you have Bucket policy. This is the bucket policy editor, where you can tune various access controls for this specific bucket. The first access control we'll look at is the IP-address-based condition. I have a sample bucket policy for AWS S3; let's quickly copy it, and I'll paste it over here. What this bucket policy says is that the principal is an asterisk (everyone) and the action is "s3:*", which means all actions; for the resource, you have to give the ARN of the S3 bucket, so I'll say kplabs-demo. And the IP address should be the address from which you would like to access this specific S3 bucket. Let me do a quick "what's my IP" to find my IP address; I'll copy the IP address and put it over here. This is a single IP address; you can also specify a subnet range, like a /20 or a /16, and various others.
Once you do this, just click on "Save." I'll be posting these bucket policies below the videos so that you can download them and practice at your end. Now that the bucket policy is in place, whenever you make a request to the S3 bucket, S3 will verify whether the request originates from this IP address, and if yes, it will allow you to perform all the actions, even though this bucket is currently private. Even though it is private, the access-control rules we specify in the bucket policy take precedence. Now let's do one thing: let's go inside the bucket. I'll copy the link, and now, you see, I am getting the file contents back: "this is a demo lecture." So this is one example of an S3 bucket policy. There can be various more advanced bucket policy configurations, but for a basic understanding, I hope you now know what a bucket policy is all about. In the next video, we will be discussing various bucket-policy-specific configurations in great detail, when we speak about cross-account S3 bucket access. This is it; I hope this video has been clear to you, and I look forward to seeing you in the next video.
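For reference, the IP-based policy pasted in the demo generally looks like the sketch below; the bucket name and the CIDR (a single /32 here, but a /20 or /16 range works the same way) are placeholders. Applying it with boto3:

```python
import json
import boto3

BUCKET = "kplabs-demo"  # placeholder bucket name from the demo

# Allow every S3 action on the bucket and its objects, but only when
# the request originates from the listed source IP (or CIDR range).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.25/32"}
            },
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```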
11. Cross Account S3 Bucket Configuration
Hey everyone and welcome back. In today's video, we will be discussing cross-account S3 access. Cross-account S3 access is a pretty common use case that you will find in a lot of organizations, so both for real-world scenarios and for the exam, we must understand how to set up access to S3 buckets across accounts. To understand this, let's take an example where an organization has two AWS accounts. Account A has all the S3 buckets, and account B has all the EC2 instances. The EC2 instances in account B need to periodically back up their data to the S3 buckets, which reside in account A. The question is how to achieve this use case. We have already seen that S3 buckets are private by default; that means no one outside the AWS account is able to access them. The requirement here is that a different AWS account altogether should be able to access the S3 bucket, and this can again be achieved with the help of S3 bucket policies. So let's go ahead and look at how we can achieve this use case. We are in the kplabs-demo-cross bucket; within this, I'll go to Permissions and click on Bucket policy. This is the bucket policy we created earlier. In that policy, you'll notice the principal is an asterisk, meaning everyone, and then we specified a condition. When it comes to cross-account access, in the principal you need to give the ARN of the other AWS account that will be accessing this specific S3 bucket. For our demo purposes, I already have an example bucket policy created; I'll be putting it below the lecture so that you can try it out yourself, and I'll paste that sample policy over here. In the principal here, you can see I have the ARN of the destination account.
So, assuming that the current account where the S3 bucket resides is account A, and account B wants to access this specific S3 bucket, here we have to specify the principal associated with account B. From my account B, I'll just copy the account number and quickly verify it. All right, so this is the account number that I have pasted. So this is the policy: the effect is Allow, the principal is that account number, and the action is "s3:*", which means all actions will be permitted; the resource is kplabs-demo-cross. This is a simple cross-account bucket policy. Do remember that you do not really need to write a bucket policy from scratch: there is a bucket policy generator, you have the IAM policy editor, and you will find a lot of bucket policy examples in the documentation. The only skill you need is to be able to search the documentation for the right example, and to be able to read what a bucket policy really means. Once you have done this, you can go ahead and click on "Save." Perfect, it is now saved. Next, let me go to IAM; within IAM, I have created an IAM user called accountb, and this IAM user has administrator access and an access key and secret key. Coming back to our CLI, let me quickly show you what I have done: I have two profiles configured, one with the access and secret key associated with account A, and one with the access and secret key associated with account B, and we'll see how exactly this works. First, I run aws s3 ls against kplabs-demo-cross with the profile set to account A, and I am able to successfully list the contents of the S3 bucket. In a similar way, I'll run the same command with the profile set to account B, and now it shows "Access Denied." The answer to why this error occurs is quite interesting, and I have seen a lot of people lose marks in the exam on questions that pertain to exactly the use case we are discussing.
Now, the problem with this bucket policy, let me quickly open it, is that we are only granting access to the contents within the S3 bucket. Let me show you quickly: if I do aws s3 cp for demo.txt, the file downloads successfully. However, when I try to list the bucket itself, it shows "Access Denied." So what you need to do is specify an array of resources: there will be two ARNs specified here. Let me tune this policy a bit more: I'll copy the resource line once more, paste it again, format it, and on the first entry I'll remove the /* suffix to close out the array. The reason the error was occurring is that the ARN ending in /* only grants access to the objects inside the S3 bucket, not to the bucket itself. For the bucket's own ARN, we were not allowing any permission; we were only granting permission on the contents within the bucket, and that is why listing failed. So we keep both the bucket ARN and the bucket ARN with /*, go ahead and click on Save, and now that the policy is saved, we'll go back to the console and run the same command as earlier. And now you are able to successfully list and see the demo.txt file. Do remember that it is very important to understand why we put two ARNs in the resource field here; chances are this will come up on your certification exam, as such questions have been asked.
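Here is what the corrected policy looks like once both ARNs are in the resource array; the account ID and bucket name are placeholders. The first ARN covers bucket-level calls such as ListBucket, while the second covers object-level calls such as GetObject and PutObject.

```python
import json
import boto3

BUCKET = "kplabs-demo-cross"
ACCOUNT_B = "444455556666"  # placeholder ID of the account being granted access

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_B}:root"},
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",     # the bucket itself (listing)
                f"arn:aws:s3:::{BUCKET}/*",   # the objects inside it
            ],
        }
    ],
}

# Applied from account A, which owns the bucket.
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```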
Coming back to the topic, we have already confirmed that account B has access to the S3 bucket. What we'll do now is create a file called cross.txt, and within cross.txt I'll just write "this is a file from account B" and save the file. Now we'll copy it: I'll run aws s3 cp cross.txt against the kplabs-demo-cross bucket, with the profile set to account B. Now you see that cross.txt has been uploaded to the S3 bucket. To quickly verify, I'll refresh the page, and now you see there are two files: one is demo.txt and the second is cross.txt. The cross.txt was uploaded using the access and secret keys of the IAM user in account B, and demo.txt is something we had uploaded from the console in the earlier video. So that's it for cross-account S3 buckets. I hope this has been clear, but there is one challenge with this kind of access; we'll actually dedicate the entire next lecture to understanding this specific challenge, but I would just like to give you a glimpse of what happens. If I do aws s3 cp for demo.txt with the profile set to account A, specifying the path, it downloads successfully. However, when I try to download cross.txt from account A, I get a 403 "Access Denied" error, and the console shows access denied as well. Even though I am an administrator user, in fact logged in as the root account, I am still getting access denied. And many people really wonder why: they are logging in with the root account, yet they are not able to access those files. The question is why, and this is something we'll go over in the next video, along with the precautions you should take, particularly when it comes to cross-account S3 bucket access.
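You can reproduce the whole exchange from the CLI demo with two named profiles. A sketch, where the profile names and bucket are placeholders; the final GetObject from account A raises the same 403 AccessDenied teased above, because the object uploaded by account B is still owned by account B.

```python
import boto3
from botocore.exceptions import ClientError

BUCKET = "kplabs-demo-cross"  # bucket owned by account A

# Two CLI profiles, one per account (placeholder names).
s3_a = boto3.Session(profile_name="account-a").client("s3")
s3_b = boto3.Session(profile_name="account-b").client("s3")

# Account B uploads cross.txt via the cross-account bucket policy.
s3_b.put_object(Bucket=BUCKET, Key="cross.txt",
                Body=b"this is a file from account B")

# Account A reads its own demo.txt without trouble...
s3_a.get_object(Bucket=BUCKET, Key="demo.txt")

# ...but reading the object account B wrote fails, even for account A's
# root/admin credentials, because account B owns that object.
try:
    s3_a.get_object(Bucket=BUCKET, Key="cross.txt")
except ClientError as err:
    print(err.response["Error"]["Code"])  # prints: AccessDenied
```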
So when preparing, you need Amazon AWS Certified Solutions Architect - Professional certification exam dumps, practice test questions and answers, study guide, and a complete training course to study. Open them in Avanset VCE Player and study in a real exam environment. Amazon AWS Certified Solutions Architect - Professional exam practice test questions in VCE format are updated and checked by experts, so you can download Amazon AWS Certified Solutions Architect - Professional certification exam dumps in VCE format with confidence.
George Right
Aug 18, 2024, 01:48 PM
As a certified specialist from today and on, I can tell that the premium bundle from the Exam-Labs offers you a set of really good materials. I will probably even read the guide during my work to get tasks done or revise some information. There are a lot of important details in it. I liked the style as well, very easy to read and understand the material of the AWS SAP-C01 exam.
RektHa
Jul 23, 2024, 08:59 AM
I passed the exam and earned my Amazon AWS Certified Solutions Architect – Professional certification. I am very happy about this fact. Thanks to Exam-Labs, I was able to get a great score and ace the exam in no time. I bought the whole bundle, and it was worth it.
QinLin
Jul 7, 2024, 01:47 PM
I can assure that the materials from this platform are valid and relevant. I tried the bundle, and my friend got the simulator and some free braindumps, so we could test out both free and paid materials. Simulator is for the practice tests above, which you can download in VCE. Practice tests work well in simulator and bundle has all that is mentioned. Free options may have less variety in questions, but they are also useful for the practicing.