Amazon AWS Certified Cloud Practitioner CLF-C01 Topic: Exam Preparation
December 13, 2022
  1. Overview of Cloud Practitioner Exam

Hey everyone, and welcome to the exam preparation module for the AWS Certified Cloud Practitioner exam. This section contains some of the important pointers that you should understand before you go and sit for the exam. So in today's lecture, we'll give you an overview of what you can expect and what you should be preparing for. This is the official certification page from Amazon. If you scroll down to the pricing, you'll see that the exam costs 100 USD. Again, as I have stated numerous times, if you are considering a career in technology, I strongly advise you to pursue the AWS Solutions Architect Associate certification instead.

The Associate certification costs 150 USD, and it basically covers all the topics that are part of the Cloud Practitioner certification. Otherwise, what will happen is that after you take this exam, you'll prepare for the Solutions Architect Associate exam anyway and pay another 150 USD for that one. So if money is not a constraint, you can certainly take both. If you are a technical person, go for the Solutions Architect Associate. If you are not on the technical side, or you're in a role like sales manager or scrum master, then this exam is suitable for you. Perfect. Now, if you look at the exam guide, which I already have open, AWS has actually listed some of the important material that you need to go through. There are AWS white papers covering an overview of Amazon Web Services, the architecture of the cloud, AWS pricing, the TCO, and the AWS support plans. These white papers will be useful to you as you prepare for your exam. It is always recommended that you read the white papers, and I would encourage you to make reading them a regular habit.

You have to make it a habit, because when you go for the higher certifications like AWS Solutions Architect Professional, you will have to read many more white papers. So that is a must-do: go through these white papers, and they will definitely help you in your exam. Now, back to the passing score, which most of you are probably curious about. The exam is graded on a scale of 100 to 1000, with a minimum passing score of 700. From what I have seen and heard from most students, the exam is relatively simple. There are some scenario-based questions, for example a question that describes a situation and asks which AWS service will help you. Scenario-based questions can be difficult at times, but this exam has very few of them. These are the four domains that will be included, and this certification validates an examinee's ability to do the things listed here. We've talked about these throughout our video course, and in the upcoming lectures we'll go over some of the key points to remember before you sit for the exam. So that's it for this lecture, and I hope to see you in the next one.

2. Important Pointers – Part 01

Hey everyone, and welcome back. Now in today's video, we will be focusing on some of the important pointers for the exams, and today's focus is on AWS core services. Understanding some of the core AWS services is critical for the exams. So in today's video, we will be focusing primarily on the things that I have listed here: EC2, Elastic Block Store, S3, the AWS Global Infrastructure, VPC, and security groups. Now, the basic foundation for everything remains the AWS global infrastructure. Within the global infrastructure, there are three primary concepts that you should understand. The first is the region. Then you have availability zones. And third, you have edge locations. Regions are physical locations, spread across the globe, that host your data. AWS offers a variety of regions, and you should generally host your servers in whichever region is nearest to you, and by "you" I really mean your clients. So let's say that I am based out of India and most of my clients are from the US region. What I do is host my website in the US region. If I hosted it in the India region instead, the clients would see a lot of latency, and the website and the web application would feel slow. So it is best practice to host your servers closest to the region where your clients are.
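If you want to see the list of regions programmatically, here is a minimal sketch using Python and boto3 (the AWS SDK for Python). It assumes your AWS credentials are already configured; the region name passed to the client is just an example.

```python
import boto3

# Minimal sketch: list the regions enabled for this account.
# Assumes credentials are already configured (e.g., via `aws configure`).
ec2 = boto3.client("ec2", region_name="us-east-1")

for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"], "-", region["Endpoint"])
```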

Now the second thing is the availability zone. An availability zone is a combination of one or more data centers within a specific region. So, while you create an EC2 instance, you can select which availability zone you want your EC2 instance to be in. Now the best practice for high availability is that you should launch your servers in distinct availability zones. Let's say that there are two availability zones within a specific region. You should not launch all of your instances in only one availability zone; in that case, if that availability zone went down, all of your servers would go down. So the best practice is to spread your instances across availability zones. If a given region has three availability zones, and you want to launch three servers for one application, then launch one server in each zone: the first server in AZ 1, the second server in AZ 2, and the third server in AZ 3. This is how you should spread out. Now the third concept is the edge location. An edge location is where end users access AWS services. Edge locations are primarily used with CloudFront for caching-related purposes, so that users can access the website faster. So just to revise the concept, let's look at the EC2 dashboard. If you look here, this is the North Virginia region. For the North Virginia region, it shows you the overall service status, and then you have the list of availability zones. Now, the North Virginia region is one of the regions with a large number of availability zones; in fact, you have a total of six here.
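As a small companion sketch, again assuming boto3 and configured credentials, this lists the availability zones within one region, similar to what the dashboard shows:

```python
import boto3

# List the availability zones within a single region (us-east-1 here).
ec2 = boto3.client("ec2", region_name="us-east-1")

for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "-", az["State"])
```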

In comparison, consider the Mumbai region, which has two availability zones. Every region has a different number of availability zones, and do remember that there are multiple regions available; these are the regions that are, as of now, available in AWS. The next important core service in AWS is the Elastic Compute Cloud, which is EC2. Now, before you can launch an EC2 instance, you will have to select a region, and we have already discussed how to select the right region for you. So let's say that once you have selected the region and you want to launch an EC2 instance, you have to choose an operating system. You have various kinds of operating systems available, like Linux. Under Linux, you have Amazon Linux, Ubuntu, Red Hat, and various others. Then there's Windows. Under Windows, you have Windows Server 2012, Windows Server 2008, Windows Server 2016, et cetera. Each one of them forms an AMI. So you must choose the base AMI for the EC2 instance, whether you want to run Red Hat, Amazon Linux, or something else. All right? Now, any AMI that you choose has not only its own operating system but can also have its own set of software configurations, and we can launch multiple instances from a single AMI.
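As a rough sketch of picking a base AMI programmatically, assuming boto3, the filter below uses the published Amazon Linux 2 image naming pattern; this is just one illustrative way to do it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find Amazon-owned Amazon Linux 2 images; the name pattern matches
# the published naming convention for Amazon Linux 2 HVM images.
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[{"Name": "name", "Values": ["amzn2-ami-hvm-*-x86_64-gp2"]}],
)["Images"]

# Take the newest image as the base AMI.
latest = max(images, key=lambda image: image["CreationDate"])
print(latest["ImageId"], latest["Name"])
```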

So let’s go back to the console. So if you go to launch instances here, the first step is to choose an AMI, which is the Amazon machine image. Now, as we were discussing, there are multiple Amis that are available. So you have Amazon Linux. You have two options: Amazon Linux and Red Hat. Eight. In fact, this is the newest one that is available. Then you have Susie, then you have Ubuntu, and various others. You also have Windows Server 2019 and so on. So you have to select the right operating system for your EC2 instance. Now, the third important concept is the elastic block store. Now, AWS EBS, which is the Elastic Block Store, provides persistent block storage volumes for your two easy instances. Now, here we have discussed the difference between EBS and an instance store volume. So, for instance, a store volume is not persistent. EBS is persistent. That basically means that if you stop and restart the instances, they will continue to work and the data will not be lost. So the data in the EBS is persistent. Now, the EBS data is automatically replicated within a single availability zone. So basically, you create an EBS in an availability zone, and the data is automatically replicated within the same availability zone. EBS volumes are now scalable, allowing you to easily scale up or scale down. Now, EBS also provides the feature of snapshots, which is generally used for backing up the EBS volume. And once you take the snapshot, you can share it across multiple regions.

So let’s quickly look into this as well. So let’s go back to the elastic block store section. So here are the volumes. So let’s create a new volume here. The size that I give is one GB. And again, within the volume type, there are multiple volume types that are available. Each has a different use case. Now, we already discussed that the EBS is availability zone-specific. As a result, you must decide which Availability Zone you want to create your EBS volume in. Now, if I say that this EBS volume is created in US East 1, a region, that means that it will be replicated within that region. Now, you can also encrypt the EBS volume. So you can use this flag to encrypt the EBS volume. However, we’ll keep it simple. Let’s create an EBS volume here. All right? So this is the volume that is created, all right? So since this volume is not currently attached to any EC2 instance, the status is available here. Now, if you go to modify volume, you can go ahead and change the size of this volume. So I can say that now I want the volume size to be 2 GB, and I click on Modified. Let’s click on “yes.” So now it says that the volume request modification has been successful, and currently you can see that the state is available and the optimising state is 99%. So it generally takes a little bit of time to be 100% optimized.

Now, the next concept that we were discussing is the snapshot feature. Let's say that you have very important data within this EBS volume. We have already discussed that each EBS volume is replicated within the same availability zone, and if the availability zone goes down, then your data within the EBS volume will not be available. So you can take a snapshot of the EBS volume and move it across regions for backup. In order to take a snapshot, you can go ahead and click on Create Snapshot. Let's call it the First Snapshot, and go ahead and create the snapshot. This is the snapshot ID. You can click here, and the current status is pending; it will also show you the progress. Let's quickly wait for a moment here. All right, so the volume snapshot has been created. Now, as we were discussing, we can go ahead and copy this snapshot. So I can go ahead and click on Copy, and here I can specify the destination region. Let's say the destination region is Mumbai. I'll go ahead and copy it. All right, so the snapshot copy has been initiated. Let's click on Close. Now we'll go to the Mumbai region. Let's remove the snapshot identifier filter here, and you will see that the snapshot is available, and from the description you will be able to see which region this snapshot has been copied from. So this is how you can copy a snapshot across regions. Remember, the volume itself, say in the North Virginia region, is created in one specific availability zone.
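The same snapshot-and-copy flow can be sketched with boto3 as below; the volume ID is a placeholder, and the destination region mirrors the Mumbai example from the walkthrough.

```python
import boto3

source = boto3.client("ec2", region_name="us-east-1")
destination = boto3.client("ec2", region_name="ap-south-1")  # Mumbai

# Snapshot the volume in the source region (placeholder volume ID).
snapshot = source.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="First Snapshot",
)

# Wait until the snapshot leaves the "pending" state.
source.get_waiter("snapshot_completed").wait(
    SnapshotIds=[snapshot["SnapshotId"]]
)

# Copy the snapshot to the Mumbai region for cross-region backup.
# copy_snapshot is called on a client in the *destination* region.
destination.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="Copy of First Snapshot",
)
```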

So you have the option to copy snapshots across regions, which is quite a useful feature. Along with that, if you look at the permissions, let's say that you want to share this specific snapshot with another AWS account. You can edit these permission settings and specify the account number. So, if you have two accounts and you want to share this snapshot with a different account, this is something that you will be able to do. Now, the next important core AWS service is S3. You should be aware of the various storage classes that S3 provides, because you may get a question with a use case, asking which storage class is most appropriate for that specific use case. So you should be aware of what each of these storage classes is all about. You have the general-purpose storage class, which is recommended for frequently accessed data. If you have data that you know will be accessed frequently and you want a good amount of redundancy and fast access, general purpose is the simplest option. This is the default one: whenever you upload an object to S3 and do not explicitly select any other storage class, general purpose is used.
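Here's a minimal sketch, assuming boto3 and a placeholder bucket name, of how a storage class is selected at upload time; the class names in the comment correspond to the tiers discussed in this section.

```python
import boto3

s3 = boto3.client("s3")

# Upload an object with an explicit storage class; if StorageClass
# is omitted, S3 uses STANDARD (general purpose) by default.
# Bucket and key names are placeholders.
s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/archive-2022.csv",
    Body=b"some long-lived, infrequently accessed data",
    StorageClass="STANDARD_IA",  # others: ONEZONE_IA, INTELLIGENT_TIERING,
                                 # GLACIER, DEEP_ARCHIVE, REDUCED_REDUNDANCY
)
```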

Then there's the infrequent access storage class, which is typically used for long-lived, infrequently accessed data. Any data that you want to keep for a long time will have the same amount of durability, but its availability may be reduced. It might happen that sometimes you want to access the data and are briefly unable to, but that is very minimal. So for infrequent access, the availability aspect is compromised. Then you have reduced redundancy, for frequently accessed but non-critical data; here the redundancy aspect is compromised. So in the second storage class, availability is compromised, while in this third one, redundancy is compromised. Since the redundancy aspect is compromised, it might happen that your data is lost, which is why it is specified for noncritical data. Then there's intelligent tiering, which is used when you have data that you know should be long-lived but whose access patterns you don't know. We've already talked about intelligent tiering because there's a significant price difference between general purpose and infrequent access. Depending on the access patterns, intelligent tiering will automatically move your data between the frequent and infrequent tiers.

So this is a great option. Then you have One Zone Infrequent Access. This is again for long-lived and infrequently accessed data, and again it is for noncritical data, since in this case you will have to compromise on redundancy: only one availability zone is used to store your data. Once again, this can be risky, because if the availability zone fails, your data will be inaccessible. That is why it is again mentioned here that it is used for noncritical data. Now, one of the advantages of One Zone-IA is that it is quite cheap. We have already discussed the pricing aspect, the differences in pricing between general purpose, infrequent access, and One Zone-IA, so you can go ahead and refer to that slide. Then you have Glacier Deep Archive. Glacier Deep Archive is generally used to store archived data that you know will be rarely accessed, so your retrieval time is in hours. You cannot get your data back within minutes, so you have to decide accordingly. On the contrary, you have Glacier. Glacier is again used for archiving data, but with retrieval time measured in minutes.

So, if you know you have archived data that you rarely need to access, but for compliance or forensics you may need to retrieve it right away, then you should store it in Glacier and avoid Glacier Deep Archive. Now, the last topic for the part one video is the Virtual Private Cloud. The Virtual Private Cloud allows us to define a custom network for our AWS resources, and we can implement granular controls on this custom network. Going back to the EC2 console, let's click on "Launch instance" here. Whenever you select an instance, let's say t2.micro, and go to the configuration details, you'll see this. Here you have the option to select the Virtual Private Cloud. This is the default VPC, and all of these configuration settings are for the default VPC. You can also go ahead and create a new VPC where you can define your own custom network. Currently, this VPC has a CIDR range of 172.31.0.0/16; you can have your own different CIDR block altogether. You can also have VPC-specific configurations, like the NAT gateway, which is VPC-specific. You attach a NAT gateway to a specific subnet, and then you have network ACLs that are also associated with a subnet and VPC. You can define your own custom VPC, where you can do all of your custom configurations, or you can keep the default VPC. You can also create custom configurations on top of the default VPC.
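As a rough sketch of defining such a custom network with boto3, the CIDR blocks below are arbitrary example ranges chosen to differ from the default VPC's 172.31.0.0/16:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a custom VPC with its own CIDR block.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out a subnet in one availability zone.
ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)
```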

3. Important Pointers – Part 02

Hey everyone, and welcome back. Now in today's video, we will be discussing some more of the important pointers; our focus today is the security services. There are certain security-specific areas that you should be aware of for the exams. These are some of the very, very important areas that you should know before you sit for the exams, so let's go ahead and discuss them. Now, the first one is the shared responsibility model, and this is something that you should be aware of. If you create a service or launch a cloud server, you do not have complete control over that server. Some of the control sits with the cloud service provider, in this case AWS, and some of the control sits at the customer level. So let's say that during a breach, when a server gets hacked, there should not be a conflict over whether AWS was responsible or the customer was responsible. There should be a clear distinction, so the customer knows what AWS is responsible for and what customers are responsible for. That is defined within the shared responsibility model.

Now, AWS is responsible for the physical security of facilities as well as the infrastructure, which includes compute, database, storage, and networking resources. So let's say there is a bad guy who managed to get into the data center and steal some information from there. That is something that we, as users, do not have control over, so if those kinds of things happen, AWS is responsible for them; as customers, we are not. The customer, on the other hand, is responsible for the software, data, and access that sit on top of the infrastructure layer. So let's say you have created an EC2 instance and have kept the firewall open to the entire world. In that case, if the system gets breached, it is the customer's responsibility. The firewall itself, the security group service, is provided by AWS, so if you have configured a security group and the service is not working as expected, AWS is responsible for that. But you have to make sure that you have the right access rules within your security group. You also have to make sure that whatever software you run on your EC2 instance is not vulnerable. If you are hosting software on an EC2 instance and the software has a lot of security vulnerabilities due to which it gets breached, then that is the responsibility of the customer. So I hope you have a clear picture of the responsibility matrix between AWS and the customer.
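To make the customer's side of that model concrete, here is a minimal boto3 sketch of configuring a security group rule; the VPC ID, group name, and CIDR range are placeholders. Restricting SSH to a known range instead of 0.0.0.0/0 is exactly the kind of configuration that falls on the customer.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

sg = ec2.create_security_group(
    GroupName="web-servers",
    Description="Example security group",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# Allow SSH only from a known office range, not the entire world.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # example range
    }],
)
```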

Again, you can read the white paper on shared responsibility, which will give you great insight into this. Now, the next very important aspect is Identity and Access Management. There are three important concepts: the first is users, the second is groups, and the third is roles. So far, we've established that we should never use the root account for day-to-day work in AWS. Whenever you create an AWS account, by default you will have root credentials. Once you get the root credentials, the first thing that you should do is create an IAM user and attach an appropriate IAM policy to that user. You can also set up multi-factor authentication for the root user as well as for IAM users. You can also create access and secret keys that can be used for AWS CLI operations, so you should be aware of how to configure the access and secret keys for the AWS CLI. Along with that, for EC2 instances, we should never configure access or secret keys within the server; we should always make use of an IAM role, and IAM policies can also be assigned to an IAM role. So let's say an EC2 instance wants to upload some data to an S3 bucket. You should not configure an access and secret key on the server; instead, you can attach an IAM role to that EC2 instance, and that role can have a policy that allows uploading data to that specific S3 bucket. A rough sketch of creating such a role is shown below.
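This is a minimal sketch, assuming boto3; the role name, policy name, and bucket ARN are illustrative placeholders, not anything from the course. In practice you would also create an instance profile and attach it to the instance.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: let the EC2 service assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(
    RoleName="ec2-s3-upload-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Inline policy: allow uploads to one specific bucket only.
upload_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-example-bucket/*",
    }],
}
iam.put_role_policy(
    RoleName="ec2-s3-upload-role",
    PolicyName="allow-s3-upload",
    PolicyDocument=json.dumps(upload_policy),
)
```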

This is how you should design things. And do not forget multi-factor authentication, both for the root user as well as for the IAM users. Now, the third important thing you should be aware of is AWS Shield. AWS Shield is a dedicated service that protects against distributed denial-of-service attacks. There are two variants of AWS Shield available: one is Shield Standard, and the second is Shield Advanced. For the exams, you should understand the distinction between Shield Standard and Shield Advanced. This specific table gives you an overview of the differences between them. One of the big differences is the DDoS Response Team support. With Shield Standard, you do not automatically get access to the DRT team; with Shield Advanced, you do. So let's say that there is an active DDoS attack going on against your infrastructure. With Shield Advanced, you can engage the dedicated team to help resolve your issue.

All right, with Standard you do not get that. Along with that, with the Standard tier you also do not get features like layer 3 and layer 4 attack notifications or historical reports, and you do not get the cost protection feature either. With the Advanced tier you do get all of these options, but Advanced does come with a dedicated cost, so there's a little caveat there. Now the next important service is AWS Inspector. AWS Inspector is similar to a vulnerability scanner: it will scan a system against specific assessment rules and provide the details accordingly. There are various rule packages available, such as CVE, CIS, runtime behavior, and various others. In essence, you have the Inspector agent installed on the server, and that agent will scan your server for vulnerabilities. All of these are security vulnerabilities present within the server, and these vulnerabilities need to be patched. As a result, AWS Inspector is a fantastic service for CVE scanning and is widely used for it. The last important pointer for today's video is security and compliance.

Now, AWS services are compliant with various major industry certifications, like PCI DSS, ISO, FedRAMP, FISMA, HIPAA, and various others. Whenever you go through an audit, let's say your company is going through a PCI DSS audit, the auditor will ask you for the compliance attestation certificates for services like S3. So let's say you state that all of your backups are stored in S3; the auditor might ask you for a certificate of PCI DSS compliance for it. If you want certain certificates related to PCI DSS, FedRAMP, or FISMA to show to your auditor, then you can make use of the AWS Artifact service. AWS Artifact is a service that provides on-demand access to AWS security and compliance reports and select online agreements. So, once again, Artifact is a fantastic service for this.

4. Important Pointers – Part 03

Hey everyone, and welcome back. Now in today's video, we will be discussing part three of the important pointers for the Cloud Practitioner exams. Today's video will concentrate on the services that are typically useful during overall deployments. These are some of the services that we'll be discussing throughout this video, so let's get started. Now, the first important one is AWS CloudFormation. AWS CloudFormation basically allows us to deploy infrastructure in the form of code. Remember, whenever you get a question in an exam about which service will help you with infrastructure as code, the obvious answer is CloudFormation. CloudFormation supports almost all of the AWS services. Note that I said almost; it's not 100%. Let's say there is a new AWS service that was just launched; it might happen that that specific service is not yet supported by CloudFormation.
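To make "infrastructure as code" concrete, here is a minimal sketch that creates a CloudFormation stack from an inline template using boto3; the stack name and the tiny one-bucket template are purely illustrative.

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# A tiny illustrative template: one S3 bucket, defined as code.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
"""

cloudformation.create_stack(
    StackName="example-stack",
    TemplateBody=template,
)
```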

Now, CloudFormation by itself is a free service, so you do not get charged for CloudFormation itself, but the resources that are created through CloudFormation templates are charged. So let's say you have written a CloudFormation template to create an EC2 instance. When you run that template, you will not get charged for CloudFormation, but you will get charged for the EC2 instance that gets created by it. Now, the second important service is AWS Elastic Beanstalk. Elastic Beanstalk allows us to simply upload our code. Let's say you have written an application in Java; you can just upload your Java application here, and Elastic Beanstalk takes care of things like deployment, capacity provisioning, load balancing, auto scaling, application health monitoring, and so on. This is great because, let's say, a developer doesn't know what capacity provisioning is, or how to configure auto scaling with load balancing, and so on.

The code is the only thing he knows and the only thing he needs to concentrate on. So Beanstalk is a really great service, and it is extensively used by developers, who can just upload their code and rest assured that everything else will be handled by Beanstalk. Beanstalk is, however, limited in what it can offer. It can make provisioning easier: for instance, it can provision RDS, load balancers, security groups, etc. But it simply cannot do everything; there are limits to what Elastic Beanstalk can offer. Now, the third important service is AWS Lambda. AWS Lambda allows us to run code without provisioning or managing servers, and it is for this reason that it is so effective in serverless architectures. With AWS Lambda, you only pay for the compute time that you consume, and there is no charge when your code is not running. So, if your code only runs for ten minutes in 24 hours, you only pay for those ten minutes; you do not pay for the remaining 23 hours and 50 minutes. If you go with servers instead, you have to keep them running all the time, so you have to pay for them all the time. So serverless is really a great thing, and a lot of organizations are moving towards it primarily because of the advantages it offers. Even in AWS Lambda, all you have to do is upload your code, and Lambda takes care of everything, so you don't really have to worry about high availability, provisioning, et cetera. Everything is taken care of by AWS Lambda.
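Before we move on, here is what that "just upload your code" model looks like: a minimal, illustrative Lambda handler in Python. The event shape is made up for the example; Lambda simply invokes this function on demand and bills only for the compute time each invocation consumes.

```python
# A minimal Lambda handler: AWS runs this function on demand and
# bills only for the compute time each invocation consumes.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }
```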

Now, the next important service is AWS CloudFront. CloudFront is a content delivery network service. So, if you see "content delivery network" or "CDN" in the exam, CloudFront is the right service right away. CloudFront is also one of the services that helps during distributed denial-of-service attacks; do remember this word, DDoS. One of the important capabilities of this service is caching: data can be cached across multiple edge locations across the world. Remember that the edge location is distinct from the AWS region and availability zone. In CloudFront, we typically have an origin. The origin is basically the place where the source content resides. So let's say you have a CDN for your website; where is your website actually stored? That is what origin means. Your website can be stored in S3, or in EC2, et cetera. You can even have an ELB, Route 53, S3, or EC2 as the origin. Now, speaking about the database primer, do remember that there are various kinds of databases. One is relational databases, which are also referred to as OLTP; RDS is one example of that. Then you have NoSQL databases, and one example of that is DynamoDB. Then you also have the data warehouse, which is Redshift and is for OLAP, and you have in-memory databases, which are offered through ElastiCache. So within ElastiCache, you have Memcached and you have Redis.

Remember this, because they may ask you about OLTP in the exam; when they say OLTP, remember that it maps to relational databases and RDS. If they ask you for a NoSQL database, the answer should be DynamoDB. If they ask about a data warehouse, it should be Redshift, and so on. You should also be aware of auto scaling. Auto scaling allows us to scale the number of servers up or down depending upon the overall demand. Here, you can specify the minimum and maximum. The minimum size basically says that at any given instant of time, a minimum of one instance (or two instances) should always be running. And the maximum defines how many instances can get launched whenever the load increases. Now, for the Cloud Practitioner exam you will not really be quizzed on how this works in practice; that level of detail is typically asked in the Associate-level certifications. A rough sketch of those minimum and maximum settings is shown below.
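This sketch assumes boto3, an existing launch template, and placeholder names and subnet IDs; it simply shows where the minimum and maximum sizes live.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep at least 1 instance running at all times, and allow the
# group to grow to 3 when load increases. The launch template
# ("web-template") must already exist; names and subnets are placeholders.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    MinSize=1,
    MaxSize=3,
    DesiredCapacity=1,
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    # Spread instances across subnets in different availability zones.
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)
```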

The next thing that you should remember is the EBS volume types. Primarily, they are divided into SSDs and HDDs. The SSD types are the general-purpose SSD, also known as gp2, which balances price and performance for a wide range of workloads, and the provisioned IOPS SSD, also referred to as io1. Let's say that for certain applications you really need a very fast disk, faster than the general-purpose SSD; in that case, you can make use of provisioned IOPS. This type of disk is generally used for mission-critical workloads that require very fast drives. Next are the magnetic drives. Under magnetic drives, you have throughput optimized, also referred to as st1, which is a low-cost HDD for frequently accessed data. Then you have the cold HDD, also known as sc1, which is again a low-cost HDD intended for data with a very low frequency of access. All right, so these are the two primary HDD types. Now, there are certain miscellaneous important points that you should remember. There are three primary ways to access the AWS environment: one is through the AWS console, which is the GUI; the second is through the AWS command-line interface, the CLI; and the third is with the help of the AWS SDKs. You should also remember certain common port numbers: port 22 is used for SSH, port 80 for HTTP, 443 for HTTPS, 3306 for MySQL databases, and 3389 for RDP. Along with that, do remember that AWS offers the Route 53 service, which is a DNS service hosted and managed by AWS. You should also remember the various types of load balancers offered. There are three primary types: the application load balancer, the network load balancer, and the classic load balancer.

The application load balancer is for layer 7 traffic, which is HTTP- and HTTPS-based traffic. Network load balancers operate at the network layer; they offer very high performance, and we can also associate a static IP address with a network load balancer. The classic load balancer is generally used for development and testing purposes. Now, the last slide for today's video is CloudTrail and CloudWatch. CloudTrail is basically used to log all the AWS API calls, and it is enabled at the per-region level. So let's say that a certain person is doing malicious activity, certain compromises have occurred within your AWS environment, and you want to see who has done exactly what. In those cases, CloudTrail is an excellent service, because it logs all AWS API calls and shows you exactly who has done what within your AWS environment. You also have a service called CloudWatch. CloudWatch is primarily used for monitoring, and it can monitor various things like CPU, disk, and network utilization, among others.
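As a rough illustration of that "who did what" use case, this boto3 sketch queries CloudTrail for recent TerminateInstances calls; the event name and time window are just examples.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Who terminated instances in the last 24 hours? CloudTrail records
# the API call, the caller identity, and the time of each event.
now = datetime.now(timezone.utc)
events = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "TerminateInstances",
    }],
    StartTime=now - timedelta(days=1),
    EndTime=now,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```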

5. Important Pointers – Part 04

Hey everyone, and welcome back. Now, in today's video on the important pointers for the exams, part four, we will be discussing the billing aspects. As far as the billing aspects are concerned, there are certain important topics that you should remember for the exams. One is AWS Organizations. You have EC2 pricing. You have the AWS support plans, tags and resource groups, and the AWS pricing calculators. So let's get started. The first of these is AWS Organizations. One of the primary features of AWS Organizations that you should remember is that it allows consolidated billing across multiple AWS accounts. There are organizations that have more than 100 accounts, and in such cases it is not really feasible to go and log in to every individual account to look at the bill. What you want is to centralize the billing structure so that within a single centralized AWS account, you will be able to see the bills for all 100 accounts.

That is possible with the help of consolidated billing. Now, do remember that for consolidated billing, you have to link the accounts. Even so, the paying account will not have access to the resources of a linked account: in consolidated billing, although the accounts are linked, the resources cannot be accessed through the link. Consolidated billing also allows us to get volume discounts on services, and reserved instance discounts are applied across all the accounts that are part of the single consolidated billing family. The next important aspect is EC2 pricing. You should have a high-level overview of the differences between on-demand, reserved, and spot instances; these are the three most important ones to remember, and you should also know about dedicated hosts. For on-demand instances, billing is pretty straightforward: you pay a fixed rate, by the hour or the second, without any commitment. You run the instance for one hour, you pay for one hour. That's about it. For reserved instances, you reserve capacity for a term of one or three years, and because you reserve the capacity, you also get a significant discount.

Now, spot instances allow customers to bid for unused EC2 capacity. Spot instances are typically used for applications that have flexible start and stop times. So, if you bid on an EC2 instance and someone bids significantly higher than you five minutes later, your EC2 instance can be terminated. Thus, it is important that spot instances are only used for applications that tolerate flexible start and stop times. Now, with spot instances, let's say the instance is terminated from the AWS side; then you won't be charged for the partial hour. If the customer terminates the spot instance themselves, they will be charged for it. And the last one is the dedicated host, which is basically a physical EC2 server dedicated to a single customer. Generally speaking, with on-demand instances a single physical server contains multiple virtual machines that can be shared by multiple customers; with a dedicated host, by contrast, the single physical server is dedicated to only one customer. A minimal sketch of requesting a spot instance is shown below.
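This sketch assumes boto3 and a placeholder AMI ID; it shows the spot market option on an otherwise ordinary instance launch.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a spot instance instead of an on-demand one; suitable only
# for workloads with flexible start and stop times. The AMI ID is a
# placeholder.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},
)
```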

Now, there are certain services for which you do not really get charged. Some of them are CloudFormation, Elastic Beanstalk, IAM, VPC, auto scaling, and consolidated billing. Do remember, though, that it's not that everything related to CloudFormation is free. Let's say that you create an EC2 instance through CloudFormation: that EC2 instance will get charged, but CloudFormation by itself will not. So that is one thing you need to remember. Now, the next important thing is the AWS support plans. There are various support plans available: Basic, Developer, Business, and Enterprise. At a high level, you should understand what each of these support plans entails and how they are charged. So this is one thing that I would suggest: go through the support plans page and look into the feature sets that each of them provides. Along with that, you should be aware of tags and resource groups. Tags are basically key-value pairs that can be attached to resources like EC2 instances, EBS volumes, and others, and resource groups essentially allow us to group resources with similar tags together.
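Attaching tags is a one-call operation; this boto3 sketch uses placeholder resource IDs and example key-value pairs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach key-value tags to an instance and a volume (placeholder IDs).
ec2.create_tags(
    Resources=["i-0123456789abcdef0", "vol-0123456789abcdef0"],
    Tags=[
        {"Key": "env", "Value": "dev"},
        {"Key": "team", "Value": "payments"},
    ],
)
```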

One great thing about resource groups is that we can build further automation on top of them. For example, you can create automation where all EC2 instances tagged env=dev are stopped after 8:00 p.m. at night. This is very useful, because developers frequently leave EC2 instances running 24 hours a day, which raises your costs. So you can stop all the EC2 instances that have the tag env=dev after 8:00 p.m., and maybe start them again in the morning at 9:00 or 10:00 a.m. This use case can be achieved with the help of AWS Systems Manager, and it can also be achieved with a Lambda function; a rough sketch of the Lambda approach is shown after this section. Now, the last important pointers for today's video are the Simple Monthly Calculator and the TCO calculator. You should be aware of both of them. The TCO calculator is quite important: do remember that TCO is used for comparing the cost of running infrastructure on premises versus in the cloud. It can also generate reports that you can share with management. So let's say you're trying to convince your management to move from the data center to the cloud; you can make use of the TCO calculator to generate reports and share them. The Simple Monthly Calculator is also quite useful. If, let's say, someone asks you what the cost of an EC2 instance of a specific type is if it runs for 12 hours, or for three months, you can quickly use the Simple Monthly Calculator: simply enter the EC2 instance type, specify whether it is reserved, spot, or on-demand, enter various other factors, and it will quickly give you the associated cost.
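As promised above, here is a rough sketch of the tag-based shutdown automation as a Lambda-style function in Python with boto3. The env=dev tag and the 8:00 p.m. schedule (for example, via a scheduled EventBridge rule) are assumptions for illustration, not a prescribed setup.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def stop_dev_instances(event, context):
    """Stop all running EC2 instances tagged env=dev.

    Intended to be triggered on a schedule, e.g., an EventBridge
    rule that fires at 8:00 p.m. each night (an assumed setup).
    """
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:env", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```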
