Amazon AWS Certified Cloud Practitioner CLF-C01 Topic: Security Aspect Part 1
December 13, 2022
1. Understanding Shared Responsibility Model

Hey everyone, and welcome back. In today’s video, we will be discussing the AWS shared responsibility model. So let’s understand the shared responsibility model with a simple example. Let’s say that this is your own server room, which your organization is handling. Since all of these servers and the server room are handled by your organization, the entire responsibility falls on you. That includes responsibilities such as power backup: you need a proper UPS so that your electricity does not get cut off. You also need air conditioners to cool down the servers. The entire responsibility for security, for maintenance, and for upgrading whenever required falls on your organization.

However, in the cloud, you do not really have access to the physical servers. When you switch from on-premises servers to cloud servers, the responsibility model changes dramatically. So in the cloud model, you have something called a “shared security responsibility model.” What really happens here is that AWS has certain responsibilities, and as a user, you also have certain responsibilities. AWS has responsibilities related to compute, storage, database, and networking. So for all the servers that they are hosting, they have the responsibility to make sure that they have appropriate power backup, appropriate internet service provider backup, electricity, the security of the server room, and so on. However, as a user, you have responsibility for your application. AWS gives you the option of creating a security group, so you need to make sure that you configure the security group well; otherwise, if the application or the website gets hacked because you have not configured your security group, you cannot really blame AWS for that. As a result, in the cloud, responsibility is always shared. So let’s understand this with a simple example.

So, let’s say that you have an e-commerce website hosted somewhere in North Virginia, and this website is specifically for Indian users. So all the Indian users will be visiting the website. Now, the problem is that since the website is in the US and the users are in India, there is a lot of latency involved over here. You cannot really blame AWS for the latency that might be occurring; it is because of the architecture that you have designed that latency has become an issue. As a result, this becomes your responsibility. By “you,” I mean the team that is taking care of the infrastructure in the organization. So if you migrate the website from the US to India, you will definitely have much lower latencies. So this is the responsibility of the organisation and not of AWS. Now let’s take one more example. There was recently a bug related to Intel Skylake that caused computers to freeze during complex workloads. So there was some patching that was required at the CPU level. Now, as a user, you do not really have access to the underlying CPU hardware; there is a virtualization layer involved. So that patching part is completely the responsibility of AWS.

So, as a user, you don’t have to worry about it; AWS will take care of it and apply whatever steps are required to resolve this issue. So let’s understand this with one more example. Say you have an operating system running on an EC2 instance, the operating system has a lot of packages, and those packages have a lot of known vulnerabilities. So this is a screenshot: the kernel has a critical-level vulnerability, and you have various packages like DHCP, which has a high-level vulnerability. So this is your responsibility, because you have SSH access to the server, and with the help of a yum update, you will be able to patch all of these vulnerabilities. So AWS is not responsible for vulnerabilities in the operating system that you are running; that again becomes the responsibility of the organization.
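Since this patching sits on the customer’s side of the model, here is a minimal sketch of what it looks like on an Amazon Linux or Red Hat-based instance (exact commands vary by distribution):

```bash
# List pending security updates, then apply all available updates (run as root).
yum updateinfo list security
yum update -y
```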

Now, AWS offers more than 90 high-level services, and these services are divided into three categories: infrastructure services, container services, and abstract services. And the shared responsibility model changes depending on the services that you are using. Infrastructure services are those where the physical components, such as hardware, servers, and networking, are managed by AWS. Examples of this in AWS are EC2, EBS, VPC, and so on. For infrastructure services, AWS is responsible for things like the foundation services: regions, availability zones, the AWS API endpoints, and so on. However, the operating system, the encryption of data, and the application itself are the responsibility of the organization. If the organization has hosted an application on EC2, and the application has a bug due to which it is not working, they cannot really blame AWS.

However, if the application is hosted on an EC2 instance and the EC2 instance is having some kind of hardware problem, then AWS would be responsible for that in terms of the infrastructure services. Now, the second category is container services. Container services are the ones that run on top of AWS-managed EC2 instances. Here, the customers do not really manage the operating system or the platform layer. One of the easiest examples is RDS. So let’s say RDS is being used by the organization. Now, the organisation does not really have any SSH access or any access to the operating system of RDS, so they cannot really take care of things like patching, et cetera. So for container services, the operating system becomes the responsibility of AWS, because operating system level access has not been given to the user; the same goes for the backup and patching, such as the monthly patching of the packages that are installed in the OS. All of those become the responsibility of AWS. However, the customer data, the firewall, the encryption of data, and the access control remain the responsibility of the user. So let’s say you are hosting a database in RDS and your credentials are leaked somewhere on the internet. You cannot blame AWS for that; that is the responsibility of the organization.

The same goes for the firewall. You simply cannot keep the RDS port 3306 open to the entire internet. You need to make sure that proper inbound and outbound rules are kept in place so that unauthorised access to the network does not happen; a sketch of such a rule follows.
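For example, instead of allowing MySQL’s port 3306 from anywhere, you would scope the inbound rule to a trusted range. A minimal sketch using the AWS CLI, with a hypothetical security group ID and CIDR:

```bash
# Allow MySQL (3306) only from the application subnet, not from 0.0.0.0/0.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 \
    --cidr 10.0.1.0/24
```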

The third category is abstract services. Abstract services basically just provide a managed layer to the users, and we just have certain endpoints through which we can interact with them. Examples are S3, SQS, and SES. So if you work with S3, you do not have access to the backend operating system; all of that is managed by AWS. So, for abstract services, things like the operating system and the backend application itself are managed by AWS. You do not really have control over the S3 application. The operating system, the application, the backup, and the encryption are taken care of by AWS. However, you need to take care of the data as well as the IAM part. So let’s say you have generated an access and secret key that gives full access to the S3 service, and those S3 access and secret keys are leaked. Then it becomes the responsibility of the user and not of AWS. So, keep in mind that security responsibility is always shared in the cloud computing model. The security responsibility is shared between you and your hosting provider, and your hosting provider can be AWS, Azure, Google Cloud, and so on.

Now, the hosting provider has to ensure the security of the cloud, and you have to ensure security in the cloud. So, once again, it depends on the service model you are using in the cloud. Let’s say you are using infrastructure as a service: you have to take care of the operating system, and you have to make sure that your application does not have any security loopholes. If your cloud provider is AWS, it gives you the security group, and you need to make sure that the security group is properly configured. If it is not configured and your server is hacked, then that is the responsibility of your organisation. However, the security group feature itself, making sure that the service keeps working and does not go down, is the responsibility of the hosting provider. Now, if both entities are doing their jobs well, we can ensure that our systems are safe and secure.

2. Understanding principle of least privilege

Hey everyone, and welcome back to the Knowledge Portal video series. Today we will be speaking about the principle of least privilege, which is one of the most important principles, specifically if you’re working as a solutions architect or a security engineer. So let’s go ahead and understand what this principle means. The principle of least privilege is, by definition, the practise of limiting access to the minimal level that will allow normal functioning for the entity that is requesting the access. As far as system administrators are concerned, users should only have access to the data and hardware that they need to perform their associated duties. So if a developer wants access to a specific log file, he should only have access to that log file and not be able to do other things through which he can gain additional information that he is not authorised to get. And many times, when a developer or someone requests access, the system administrator or even a solutions architect blindly grants access with no restrictions, resulting in a slew of breaches within the organization.

So let’s take a sample use case where Alice has joined your organisation as an intern system administrator. Now, since your infrastructure is hosted in AWS, you need to give Alice access to the AWS console, and the question is: what kind of access will you give? There are three options over here. The first option suggests that you share your AWS root credentials. This option would certainly fulfil the use case. The second option is to create a new user named Alice with full permissions for everything. Now, if you compare the first option and the second option, you might find that the second option is much better, because if you share the root credentials, both you and Alice log in with the same username, and it becomes difficult to track what you or Alice are doing within your organization. Tracking individual users is very important. Now, the third option states to create a new user named Alice with read-only access, and out of the three, the third one is the right fit. So you need to understand that whenever someone asks you for access, you should only give him the access that is required. Now I’ll give you an example, because I used to work in a payments organisation where security was considered a top priority.

So anytime a developer would request access to a specific server, he would have to create a ticket with his manager’s approval. So that was the first step. Second step: after the manager approved, we would verify if the business justification was really there. If it was, then we would ask him what commands he wanted to run on the server and what log file he wanted to access. So he would say, “I want to access the application log file, and I will need three commands, which are less, tail, and more.” So we would only allow him those three commands, and we would only allow him to access that specific log file. Anyway, we are going a bit off topic. Let me give you a similar example so that we can see how exactly it would work. So, I have two terminals open over here, both of which belong to the test server. And what I will do is create a new user called “demo-user.” And once the user is created, let me log in as the demo user from the second terminal. Now, since I have just run the useradd command, I have not really done anything explicitly to provide him any kind of administrative privilege. So let’s look at what a normal user can do once he has access to a Linux system. So if I run whoami, you see it’s showing demo-user. Now, a normal user without any explicit privileges can run the netstat command, so he can actually see what ports are listening on the server. So I see that port 80 is there, port 22 is there, and a lot of other ports are also listening. Other than that, a normal user can also list what packages are installed on the server, along with the version number of each individual package. So if I just do rpm -qa | grep nginx, you can see that it is actually showing me that nginx is installed, and it is giving me the exact package version. And this is something that you do not really want others to know, because if there are any known vulnerabilities, it will make the life of a hacker quite easy. Then let’s try some different things. Let me go to /etc/nginx.

I do an ls, and if I open nginx.conf, you can see that a normal user can actually look into the configuration of the web server as well. There are a lot of other things that a normal user can also do. For example, he can go into the /boot directory, where the boot loader and your kernel files are stored. So this is, I would say, not the principle of least privilege being followed, because even a normal user with basic access can actually do many things within the server, and this is something that you do not really want. So what I have done is create a simple script called polp, which stands for the principle of least privilege. So let me run this specific script, and what it is doing is de-escalating the privileges of the demo user. So let’s wait a few seconds, and you see that minimum privilege has been applied. Perfect. So now let me try to go into /boot, and you see that it is giving me permission denied. Let me try to go into /etc/nginx, and again you see that permission is denied. So what we are doing is removing the privileges of the user so that he is only allowed to do the things that he needs and not allowed to go around and see what other things are present on the server. A sketch of such a script is below.
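The polp script itself isn’t shown in the video; here is a minimal sketch of the kind of de-escalation it might perform, using the same paths as the demo:

```bash
#!/bin/bash
# polp.sh - remove world read/execute access from sensitive locations so a
# normal, unprivileged user can no longer browse them (run as root).
chmod o-rx /boot          # boot loader and kernel files
chmod o-rx /etc/nginx     # web server configuration
echo "minimum privilege has been applied"
```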

And this is what the principle of least privilege means. So going back to the PowerPoint presentation, we have one more use case where a software developer wants to access an application server to see the logs. Since you’re the system administrator, you need to provide him access. Now the question is, how will you give him access? This is a very generic use case. The first option is to create a user with the useradd command and share the details. Now, this is something that we just demoed, and we saw what exactly a user can do if you just add a user with the useradd command. So this is something that you don’t really want. The second option is to create a user with the useradd command and add him to the sudoers list. Now, the sudoers list is a file through which a normal user can execute certain commands as an administrative user. And the third option here is to ask the developer which log file he wants to access, verify if his access is justified, and only allow him access to that specific log file and nothing else (see the sudoers sketch after this paragraph). And, of these three options, the third one is much more granular, and it is the one you should choose. I’ll give you one of the real-world scenarios. This specifically applies to a payment organisation where audits happen. So every year when the auditor comes to your organization, he’ll ask you to show the list of users who have access to, say, AWS or even the Linux servers. And if you show him the list, he’ll pick some random user, and he’ll ask you as a system administrator to provide the justification for why exactly that user has access to that server and, if he has access, what kind of commands he can run on that specific server. So it is very difficult, and if you have not really designed for the principle of least privilege, then you are really in trouble during audit times.
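As a sketch of how the third option can be implemented on a Linux server, the developer gets a sudoers entry that permits only the agreed commands on the agreed log file. The username, command paths, and log file below are hypothetical:

```bash
# Run as root: allow 'developer' to run only tail/less/more on one log file.
cat <<'EOF' > /etc/sudoers.d/developer
developer ALL=(root) NOPASSWD: /usr/bin/tail /var/log/app/application.log, \
    /usr/bin/less /var/log/app/application.log, \
    /usr/bin/more /var/log/app/application.log
EOF
chmod 440 /etc/sudoers.d/developer
visudo -cf /etc/sudoers.d/developer   # validate the syntax before relying on it
```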

3. Identity & Access Management

Hey everyone, and welcome to the Knowledge Portal video series. Today, we’ll talk a little bit about identity and access management in the Amazon Web Services environment. Basically, identity and access management, also called IAM, allows us to control who can do what within your AWS account. When it comes to the Linux environment, we have seen that if you want to create a new user, you make use of the useradd command, and further user permissions can be controlled using chmod, setfacl, getfacl, and other access control list tools. Similarly, as far as AWS is concerned, if you want to add a new user, delete a user, or give permissions to a specific user, all of those things are done with the help of identity and access management.

One very important thing to remember about identity and access management is that everything is denied by default. So whenever you create a new user in IAM, the new user will have zero privileges and zero permissions to do anything. This is in stark contrast to the user we created in the Linux environment, who was able to read the configuration, list the services that were running, and so on. Contrary to that, in AWS, whenever you create a new user, he will not be able to do anything; permission is denied by default. So let’s go ahead and create our first user in AWS through identity and access management. So let me go to IAM over here, and you see that the number of users is zero; that means I have been logging in through the root account. In the security status, the MFA item is already checked, and the next recommendation it is making is to create individual IAM users. So let’s go ahead and create a user. I’ll add a user over here. Let me name this user Alice. This is very similar to the use case that we had discussed. The access type will be management console access, and let me go to the next step. It is asking me if I want to attach some permissions to the user. Let me not do that for the time being. I’ll proceed by clicking “Next” and then “Create user.”
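The same user can also be created from the AWS CLI, which we cover in a later lecture. A minimal sketch, with a placeholder password:

```bash
# Create the IAM user and a console password; force a reset at first sign-in.
aws iam create-user --user-name Alice
aws iam create-login-profile --user-name Alice \
    --password 'Temp#Passw0rd123' --password-reset-required
```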

So what it will show is the user that was created along with the password, and this is the login link. So let me copy this, and let’s do one thing. Allow me to launch Internet Explorer, which I did not really want to do, but I don’t have Opera installed anyway. So let’s open up the link over here. Now it is asking me for my IAM username and password. So the username is Alice, and the password is the one that was created automatically. So I’ll enter the password and click sign in. Perfect. So now it is asking me to change my password. This is an important step, because otherwise the system administrator would know the password of every user. So here, the user can set his own password. So I’ll set the password, and I’ll confirm the password change. Oops. It seems the passwords did not match. Let me try again. Perfect. So it is now logging me in, and I am inside the AWS environment. Now, we already discussed that as far as IAM is concerned, everything is denied by default when a user is created. So let’s just verify. I’ll go to EC2 over here, and you’ll notice that I cannot see anything. Let me try some different services. Let me go to S3, and you see the access is denied, so it is not allowing the user to do anything. And this is one of the very nice things about IAM. So now, if we want to give access to Alice: we already discussed the use case where Alice is an intern system administrator who needs read-only access. So let’s go ahead and give access to the Alice user.

So I’ll click on the username over here, and in the permissions tab, I need to add a new permission. So I’ll click on “Add permissions.” Now, there are certain policies that are created by default. These include policies like administrator access, and there are a lot of policies related to the services that are present in AWS. For the time being, let me give the AmazonEC2ReadOnlyAccess policy. I click on “Next,” and I’ll click on “Add permissions.” Perfect. So now, if you see, the user has two policies attached. One is IAMUserChangePassword; this policy allows the user to change his own password, which we already did. The second one is AmazonEC2ReadOnlyAccess, which allows the user to read the EC2 console. So let me go back, and let me hit refresh. So now you see I am able to read, and the earlier message of permission denied is no longer present. So if I go to the Oregon region, where my instances are present, just to make sure everything is working as expected, you can see I can view everything that is part of the EC2 console. So now, going back to the PowerPoint presentation: AWS has more than 50 services available, among which EC2 is a single service. So whenever you add a user, you should know which service the user wants access to. Like when we created Alice, we only allowed her to have read-only access to the EC2 service. And, most importantly, all the other services will remain denied for Alice. So when a developer requests access to the management console, the first thing that you should ask is, “Which service do you need access to?” And if the developer says, “Okay, I need access to the SQS service,” then you give him access to the SQS service and nothing else. And this is what the principle of least privilege basically means.
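For reference, the same grant we just made in the console can be done from the AWS CLI using the managed policy’s ARN:

```bash
# Attach the AWS-managed EC2 read-only policy to Alice and verify.
aws iam attach-user-policy --user-name Alice \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess
aws iam list-attached-user-policies --user-name Alice
```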

4. AWS CLI

Hey everyone, and welcome back to the Knowledge Portal video series. And today we will be speaking about the AWS command-line interface. So let’s get started. Now, before we go into the AWS CLI, let us review the basics of the CLI. CLI stands for “command-line interface,” and it is one of the ways of interacting with a system in the form of commands. The CLI is one of the quickest ways to perform repetitive and automated tasks. When you see this image, the first thing that comes to mind is Linux. Most of us, including myself, have spent our childhood working on the Windows system, where everything we do is GUI-based. On the contrary, in Linux, most things are done through the CLI, the command-line way of doing stuff. Now, the command-line way of doing things is quite simple, and it is much faster as well. So, in order to understand the difference, we’ll take a simple use case with four steps, and we’ll compare how much time it really takes to do it the GUI way and the CLI way.

The first step is to make a directory called “Test”. Notice the capital T. Inside this directory, create three text files named 1.txt, 2.txt, and 3.txt. The content of each would be “This is KPLabs demo”. There is a fourth point, which we will skip for the time being. So let’s go ahead and do the three steps and see how long the GUI way takes and how long the CLI way takes. So let me do one thing. Let me do it the GUI way on the Windows machine. So I’ll create a folder called “Test”. Now, I’ll create a text file called 1.txt and type “This is KPLabs demo”. I’ll save this and create one more file, 2.txt, with “This is KPLabs demo”. The third file, which I’ll name 3.txt, gets the same content. So this is the GUI way of doing things. Now, what would happen if there were a hundred files and you ended up doing things manually? Now let’s do one thing. Let’s go to the Linux box. I have my Linux box up over here, and I have written a simple script called demo.sh. So let’s run demo.sh. And, as you can see, it completed in less than a second. And now, if you see, there is a folder called “Test”. Within this, there are three files, and each file contains the sentence “This is KPLabs demo”. Now, this way of doing things is quite fast.
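The contents of demo.sh aren’t shown on screen; here is a minimal sketch of what such a script might look like:

```bash
#!/bin/bash
# demo.sh - create the Test directory and the three text files in one shot.
mkdir -p Test
for i in 1 2 3; do
    echo "This is KPLabs demo" > "Test/${i}.txt"
done
```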

As you can see, it only takes a fraction of a second. And if I copy this script to some other server and run it, it would run in the same manner. That means it can be repeated, and it can be automated. And this is one of the big benefits of the CLI. Now, the same goes for AWS. Until now, we have been manually logging into the AWS console and creating the EC2 instances or S3 buckets. So all of that is the GUI way of doing it. And the GUI is almost always slow. However, there is always the CLI way, which is quite fast. And AWS also offers the CLI way of doing things. So when we talk about the AWS CLI, you might have already guessed that the AWS CLI is used for managing AWS resources from the terminal. As the advantages of the CLI say, it makes room for automation and makes things much faster. So let’s go ahead and understand and implement the AWS CLI. The first thing you’d need is something to log in with. Now, for the CLI, you do not typically supply a username and password. Instead, you provide a similar credential pair known as an access key and a secret key. So if you go to the security credentials over here, there is a section called access keys. Whenever you’re running the AWS CLI, the access key and secret key are used instead of a username and password. So let’s do one thing. Let’s create an access key. And you see, it has provided me with an access key and a secret key. Now, for those who are interested in copying and pasting these, I would really like to let you know that I will deactivate this key after the lecture, but you can try it out anyway. So, coming back to the topic: now that we have an access key and a secret key, the first thing that we will be doing is installing the AWS CLI. So let me just maximise the screen, and I’ll log in as root. Perfect. The first step is to download and install the CLI.

Now, one of the fastest ways to install the AWS CLI is through the pip command. Pip generally does not come installed by default, so you have to do yum -y install python-pip. This applies to Red Hat-based systems such as Amazon Linux and Fedora. So let’s wait a minute for the pip package to get installed. Perfect. So now that we have the pip package installed, we will run pip install awscli. So pip will install the AWS CLI package, and through the AWS CLI, we will be running commands that connect to the AWS resources. Perfect. So now we have the AWS CLI installed. If I just type aws over here, you will find that the AWS CLI is working, and if I do aws help, it will give me all the options that are available. Now, the first thing that you must do after you install the AWS CLI is to configure your credentials. Since your console username and password will not work in the AWS CLI, you have to supply the AWS access key and the AWS secret key. So I’ll run the aws configure command, and it will ask me for the AWS access key. I will copy the access key that I have generated and paste it. For the secret key, I’ll copy and paste that as well. Then comes the default region; this would be the region in which you are creating your resources, and in my case, it is us-west-2. For the default output format, you can just press Enter. And now the AWS CLI has the credentials configured.
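Putting those steps together (note that on current systems Amazon recommends the AWS CLI v2 installer instead; this mirrors the pip-based approach shown in the video):

```bash
# Install pip, then the AWS CLI (Red Hat-based systems / Amazon Linux).
yum -y install python-pip
pip install awscli

# Interactive prompts for access key, secret key, default region, and output.
aws configure
```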

Now, if you are wondering where they were configured: they were actually written to the AWS credentials file. So this is where your credentials are stored. Now that we have the AWS CLI installed, let’s try it out and see if it really works. So I’ll run aws s3 ls. And essentially, it is saying that the ListBuckets operation is denied. Perfect. This occurs because the user does not actually have permission to access the buckets. Okay, let me just adjust the user’s policies so that S3 access is allowed. Perfect. And let me go again; I’ll run the same command, aws s3 ls. And now, as you can see, I am actually able to see all the buckets. Now you will be able to do all the things that you have been doing in the GUI in a CLI way. You can create buckets, you can delete buckets, you can create instances; everything, you will be able to do. Again, one of the advantages of the AWS CLI is that it can be automated, and once you write your CLI script, you can run it in a repeated fashion. Now, one more thing that I really wanted to show you is that there is extensive documentation for the AWS CLI commands. Each of the services that are part of AWS has its own set of AWS CLI commands. So let’s try S3. I’ll open up s3 over here, and it will show me the commands related to the S3 service. If you go down, you will see the available commands, which are cp, ls, mv, and all those things. So I hope you got the basic concept of what the AWS CLI means. And in the upcoming lectures, whenever necessary, we will be using the AWS CLI to automate a lot of things. So, I hope this has been informative for you. And again, I’ll encourage you to practise this once. And I look forward to seeing you at the next lecture.
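As a quick reference from this lecture, here are a few everyday S3 operations, with a hypothetical bucket name:

```bash
aws s3 ls                                  # list all buckets
aws s3 mb s3://kplabs-demo-bucket          # create a bucket
aws s3 cp 1.txt s3://kplabs-demo-bucket/   # upload a file
aws s3 ls s3://kplabs-demo-bucket          # list the bucket's contents
```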

5. IAM Role

Hey everyone, and welcome back. So, continuing our journey with identity and access management, today we will be speaking about IAM roles. I hope you are able to understand everything and are finding it quite simple to grasp. Do feel free to connect or send your reviews, because those are the things that motivate me to wake up quite early every morning. Anyway, coming back, let’s understand what an IAM role is. Until now, we have discussed that if a user wants to access a particular resource in AWS, we attach a policy to that specific user, and once that user logs in through the AWS console or through the AWS CLI, he will be able to do things according to the policies that are attached to him. Now the question is: what happens if a server wants to do the same thing? Assume there is an EC2 instance, and that server wants to read the buckets in AWS S3.

So there are two major ways of doing things. The first thing that you might be thinking is that we can copy and paste the AWS access and secret key: we install the AWS CLI on the server, we put in the access and secret keys, and we are able to access the AWS resources depending upon the policy. That is one approach. The second way is through the IAM role. So let’s understand how it works with a simple use case. The use case states that there is a folder named backup within which a critical daily snapshot of the application data is stored. Now, as part of the backup process, you have to upload all the daily backups from the server to S3, and it says to design and implement this use case in a secure fashion. So basically, what is needed is an EC2 instance where the application data is stored.

The EC2 instance must be able to upload the files to AWS S3 in a secure fashion. So let’s go ahead and understand how we can achieve this specific use case. Now, I’m already connected to the EC2 instance. I’ll just show you. So this is the EC2 instance that is running, and I have logged in here. If I just type aws, you will find that the AWS CLI comes preinstalled as far as Amazon Linux is concerned. So if I just do aws s3 ls, you will find that it is telling me to configure the credentials, because they are not configured. Now, in the earlier lecture we had configured the credentials through aws configure, but as far as EC2 instances are concerned, this is not the right way. Here is why; let me show you. Let’s assume that this is the server, and there are a lot of other users who also have access to this specific server. We had created the Alice user’s access and secret key, and we pasted it into the virtual machine that is running over here. We already saw that whenever we run aws configure, the credentials are stored in plain text in the AWS credentials file.
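You can see this for yourself; the key values below are the example placeholders from the AWS documentation:

```bash
# Anyone with shell access to this account can read the file aws configure writes.
cat ~/.aws/credentials
# [default]
# aws_access_key_id = AKIAIOSFODNN7EXAMPLE
# aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```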

So this is my access key, and this is my secret key. Now, if one more user has access to root on the server, he will easily be able to read this file and get the access and secret key belonging to a specific user. This is one problem, and you will not be able to track it either. So if there are multiple users on the server who have root access, all of them will be able to open the file and take this data. Now let’s assume that one of the system administrators leaves your organization; you will never even know whether he has stolen the access and secret keys and is secretly using them. And there are cases where this has happened. So this is something that you do not really want to do in the first place. So what is the alternative? The alternative is the IAM role. Consider the IAM role to be analogous to the IAM user. The only difference is that the role gets attached to the EC2 instance. So let me show you, and it will become much clearer once we do the practical. If I simply click on the instance over here, you’ll see that the IAM role is none. That means there is no IAM role associated with it. So let’s do one thing. Let’s create our first IAM role. I’ll create a role over here, and it is asking me what type of role I want to create. For our case, I want to create a role for an AWS service, and the service is EC2. I’ll select the first use case and click on Next: Permissions. So now it is asking me to either attach a policy or create a policy. Now, this is very similar to how we used to attach policies to the IAM user in the previous lecture. So for our demo purposes, I’ll give S3 read-only access.

I’ll click on “Next.” Finally, you must give the role a name. I’ll give the role name as kplabs, and you can click on “Create role.” Perfect. So the role is created. Now, if you just click on this kplabs role, you’ll see that it is very similar to what an IAM user looks like. Let’s also open the IAM user. So you see, in the IAM user, we attach policies, and in the IAM role, we also attach policies. The difference is that, because this is an EC2-based role, we can attach it to the EC2 instance that we have running. So now what I’ll do is right-click on this EC2 instance, go to Instance Settings, and click on Attach/Replace IAM Role. And here you can give the role name, which is kplabs. Now you might be wondering why the other roles were not displayed. There are many roles that have been created, but this page only shows kplabs. And the answer is that when you create a role, the role is created for an individual service, and we have only one role, called kplabs, that is there for the EC2 service. It is for this reason that only this specific role can be seen in EC2. So I’ll select this role and click on “Apply.” So now the role is attached to the instance. And if I click on the instance and scroll down, you see the IAM role is attached to the instance.
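For reference, the same role can be built from the CLI. The trust policy is what allows the EC2 service to assume the role; note that the console also creates a matching instance profile behind the scenes, which from the CLI would need aws iam create-instance-profile and add-role-to-instance-profile as extra steps:

```bash
# Trust policy: let the EC2 service assume this role.
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name kplabs \
    --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name kplabs \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```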

So now, all the policies that are part of the role that we created will be applied. Let us return to the role. All the policies that we put inside this role will be applied to the EC2 instance that is attached to or connected to this specific role. So since we have the KPLabs EC2 instance, which is connected to this kplabs role, and since the kplabs role has AWS S3 read-only access, our instance should have the same permission. So let’s go back, and if I run the command again, you will see that I am able to list the buckets that are present in AWS S3. So this is the best way to do things. Remember one thing; I’ll just open up the PPT again, and we’ll go to the third slide. A role contains a set of policies, and any entity that is connected to that role will have the same permissions mentioned in the role. A role can be used by an IAM user, an AWS service, as well as a SAML provider, which we’ll be discussing in the relevant section. The last important thing that I would like you to remember is that if you’re working as a solutions architect or security engineer within your organization, do not allow the use of access and secret keys within the code or within any servers. I actually spent more than three months in my previous organization, which was a payments organization, trying to remove the AWS access and secret keys, because it was a big pain. They had actually been shared with a lot of people outside the organisation as well. So just remember: never ever use access and secret keys as far as the EC2 and AWS environments are concerned.
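One way to see why roles are safer: from inside the instance, the temporary credentials are served by the instance metadata service and rotated automatically, with nothing stored on disk. This is the older IMDSv1-style call; newer instances may require an IMDSv2 session token first:

```bash
# Prints the name of the attached role; the CLI picks the credentials up itself.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Works now with no keys in ~/.aws/credentials at all.
aws s3 ls
```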

6. Compliance

Hi everyone, and welcome to the new compliance section. In today’s lecture, we will understand what is meant by “compliance,” and the second question is, “Why do we need compliance?” Let’s take a simple use case to understand both of these questions. Let’s assume that there is an e-commerce company very similar to Amazon. Generally, when you shop on Amazon, you will be asked for a credit card or a debit card. So, once you enter your debit or credit card information, they store that information on their servers. Now, this is a similar use case. Let’s assume that this is one of those e-commerce companies, and this company asks you for a credit card or a debit card for purchasing an item. Now let’s assume a simple scenario: say, for example, that you are storing your personal credit or debit card on their servers.

So, if you store your card on their servers, how do you know your card is secure? Someone at the organization might actually download your card information and use it anytime, right? So how will you ensure that the company that is taking your card information is actually storing or transmitting it securely? This is one of the very important questions that arises as far as the customer is concerned. Some customers might say, “Okay, since I’m storing my credit card information with you, I have to visit your data center or your office and make sure that whatever security controls you are implementing are good enough.” But if it is a big organization, they will not allow hundreds of users to come into their data centres to verify that the security controls are adequate. So it becomes practically impossible to do that. So the question is: how can a company assure users that their sensitive information is secured? And this is where compliance comes in. There are regulatory compliance bodies, formed either by governments or by independent bodies, that define the policies and procedures that organisations must follow in order to make sure that the data that customers are saving follows industry-standard best practises for security. If you want to transmit or store credit card or debit card information, for example, you must comply with PCI DSS. Now, PCI DSS compliance is formed by an independent body called the PCI SSC, which in turn is formed by various entities like Visa, Mastercard, JCB, American Express, etc. A second example is HIPAA. HIPAA is a regulatory compliance law created by the United States government. The third example is RBI PSS.

So this is formed by the Reserve Bank of India for the organisations that have a digital wallet. Now the question is: let’s say I’m launching an organization; do I need to adhere to some kind of compliance? Is it mandatory? And the answer is that it really depends on your business. Depending on what kind of business you are doing, you might have to comply with one of the regulations. Let’s say, for example, that your organisation is storing the customer’s credit or debit card, similar to what Amazon does. In that case, you need to have PCI DSS compliance. As a result, all large companies that accept credit card information, such as Amazon, eBay, Snapdeal, and other big e-commerce companies, must be PCI DSS compliant. A second example is HIPAA. HIPAA is a US government-created law that governs healthcare data. So any organisation in the US that deals with healthcare data, like insurance or any data that talks about health-related things, must comply with HIPAA. Another example is the RBI PSS. This is specific to organisations that have a digital wallet in India. So any organisation launching or operating a digital wallet must have RBI PSS compliance. Failure to comply can result in legal action and severe penalties.

So if you are doing one of these activities and you are not complying, then you will have to face major penalties as well as legal action. Now, the compliances that we discussed in this section are some of the ones that deal with security. There are a lot of other compliance standards as well, like ISO or SOX, which also deal with securing data. Because compliance is one of the most pressing issues confronting most organisations today, it means a lot to us as security professionals. One of the major aims of compliance is to make sure that the information is secure. And this is one of the reasons why security professionals come into the picture. Many organisations are hiring compliance officers, who in turn hire various security professionals whose main aim is to ensure that the organisation conforms to regulatory compliance on a regular basis. So this is why, generally, if you go to LinkedIn or your local job site and type, let’s say, PCI DSS, you will find lots of organisations that are hiring people in the security profession who have implementation or auditing knowledge of PCI DSS compliance. So that is the fundamental information about compliance and why it is required. Again, PCI DSS is one of the very important compliances that a lot of organisations have.

7. PCI DSS Compliance

Hi everyone, and welcome to the compliance section. Today we are going to talk about an extremely important compliance standard, specifically for organisations that deal with customers’ card information, like debit cards or credit cards. This compliance is called PCI DSS. So, let us examine this critical regulation. As a gist, PCI DSS compliance must be implemented by all the entities involved in processing, storing, or transmitting cardholder data. So, in terms of cardholder data, you have debit cards and credit cards.

So all of this is basically cardholder information. Now, generally, there are various organisations that deal with these things, like processing the payment from a client to a merchant. Some of these are Visa, Mastercard, American Express, Discover, and JCB, which is generally used in Japan. So there are various organisations that are involved here. Now, credit card or debit card information is extremely sensitive information, so you have to make sure that your organisation is compliant with the best security controls that are available in the industry. And this is one of the reasons why these entities, like Visa and Mastercard, would not allow you to deal with that information if you did not have proper security controls in place. Earlier, what used to happen was that each organization, like Visa and Mastercard, had its own security control list, and if your organisation was compliant with that list, then they would allow you to deal with the data of customers. So, for example, Visa had something called the “Cardholder Information Security Program.” If you wanted to deal with Visa-related cards, you would have to be compliant with this particular program. Again, if you wanted to deal with Mastercard, you would need to be compliant with their own program. The same went for American Express, Discover, and JCB. So for each of these five entities, you would have to be compliant with their programmes individually, which was actually very tough.

So there were a lot of problems following each and every security control that was individual to each of these entities, both from a technical as well as a business point of view. So these companies decided to band together and launch a central set of security controls, or policies, which were listed under a common standard known as PCI DSS. Then, if you were PCI DSS-compliant, you could deal with all of these entities. You don’t have to worry about following every security control of the individual entities; you just have to follow one thing, which is PCI DSS. Now, PCI DSS has a set of requirements. There are twelve requirements: the first deals with firewall configuration, the third and fourth relate to encryption, and there are others covering antivirus software, log monitoring, and so on. So if your PCI DSS compliance is in place, you can go ahead and store, transmit, and process the cardholder data. So this is the basic gist of PCI DSS compliance. Again, PCI DSS must be implemented by all the entities involved in processing, storing, or transmitting cardholder data. So let’s say you’re working as a security engineer, and your company comes up with a requirement that they want to store cardholder data.

So, if your company wants to store cardholder data, you must achieve PCI DSS compliance. So let’s see the PCI DSS reference guide. This is the official reference guide from the PCI Council. If I go a bit down, as you can see, these are the twelve requirements that your organisation must meet before you can achieve this compliance. And these are the entities that we were talking about: American Express, Discover, JCB, Mastercard, and Visa. So if you have a PCI DSS certification, you can deal with all of this card information. Now, if you go a bit deeper, let’s say into requirement three, which is protecting stored cardholder data: this basically covers how you encrypt the cardholder data, how you hash the cardholder data, where the encryption key is stored, and so on. A lot of these things come under requirement three. And within requirement three, there are many sub-requirements to follow, such as 3.1, 3.2, 3.3, 3.4, 3.5, and 3.6, and the same is true for requirement six: 6.1, 6.2, 6.3, and so on. So, once you have met all of these requirements, an external auditor will visit your organization, and the auditor will assess whether you have actually met all of these requirements in a verifiable manner.

Once the auditor confirms that you have the proper controls in place for each of these requirements, he will go ahead and formulate a report and give you an official certificate of PCI DSS compliance. Now, I also have a nice little book here, which I hope you can see. This is the PCI DSS implementation workbook; we deal with PCI DSS compliance within our organisation because we store the customers’ data as well as being a payment gateway. So I would really say this is an amazing certification, and it will give you a huge amount of knowledge as far as security is concerned, because it mostly deals with securing the customers’ card information. So, coming back to the topic: this is just about PCI DSS compliance. Now, one thing that I really wanted to let you know is that AWS really helps as far as PCI is concerned, because when you talk about all of those small points that are there in PCI DSS, they are sometimes very difficult to implement. And if you are using AWS, your life is going to be much simpler. For example, the first requirement is to have a firewall at every connection; you already have a security group attached to each EC2 instance, so this requirement is basically solved. There are various other requirements, like reviewing logs and security events, for which you can use Elasticsearch or CloudWatch, and for protecting cardholder data you can use KMS, CloudHSM, and so on. So a lot of AWS services will help you achieve a lot of your controls as far as PCI DSS is concerned. So that is the basic understanding of PCI DSS compliance and why it is required.
