2. Lecture 2 - Common Policies and Key Principles
The first is known as the principle of least privilege. The principle of least privilege, also known as the principle of minimal privilege or the principle of least authority, requires that in a particular abstraction layer of a computing environment, every module, such as a process, agent, or program, depending on the subject, must be able to access only the information and resources that are necessary for its legitimate purposes. For example, a new hire in the marketing department should not have access to employee salary information in the HR database, as it is not required for the employee to carry out his or her duties.

The principle of separation of duties, also known as segregation of duties, is the concept of requiring more than one person to complete a task. This is a form of control intended to prevent fraud and error. For example, if an individual has the authority to create accounts in the system, then he or she should not have the ability to perform accounts payable transactions on the system. If this individual were granted both permissions, the company would be vulnerable to the individual setting up fictitious accounts and then paying those fictitious accounts, with the funds being redirected to the employee. Separation of duties would be effective in this case: the employee may have the authority to set up the account, but only a supervisor may have the authority to set up a payment on the account.

Mandatory vacations provide a similar control mechanism, the intention being that any questionable actions may come to light while the employee is away on vacation. Finally, the practice of job rotation serves a purpose similar to mandatory vacations. With job rotation, personnel in sensitive positions would not have the ability to cover up fraudulent activities, as they are rotated periodically.
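The two principles above can be sketched in code. This is a minimal illustration, assuming hypothetical role names and permissions; it is not drawn from any real access control product:

```python
# Hypothetical role-to-permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "marketing_hire": {"read_campaigns", "edit_campaigns"},
    "hr_analyst": {"read_salaries"},
    "accounts_clerk": {"create_account"},
    "supervisor": {"approve_payment"},
}

def is_allowed(role, permission):
    """Least privilege: grant only permissions explicitly assigned to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

def pay_account(creator_role, approver_role):
    """Separation of duties: account creation and payment approval
    must come from two different roles."""
    if creator_role == approver_role:
        return False
    return (is_allowed(creator_role, "create_account")
            and is_allowed(approver_role, "approve_payment"))

# The new marketing hire cannot read HR salary data.
assert not is_allowed("marketing_hire", "read_salaries")
# One person cannot both create an account and approve its payment.
assert not pay_account("accounts_clerk", "accounts_clerk")
assert pay_account("accounts_clerk", "supervisor")
```

Note the default-deny behaviour: a role not listed in the mapping, or a permission not explicitly granted, is refused, which is the essence of least privilege.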
Business Continuity Planning
1. Lecture 1 - Business Continuity Planning
Welcome to Unit Five. In unit five, we will discuss business continuity planning. By the end of this unit, you will be able to define business continuity planning. You'll be able to describe the scope of business continuity planning. In addition, you will be able to define business impact assessments, and you'll be able to identify business continuity controls such as high availability, SPOF, and more. In addition, you will be able to compare and contrast RAID 1 and RAID 5 technologies. Let's get started. Business continuity planning is the process of creating systems of prevention and recovery to deal with potential threats to a company. Any event that could negatively affect operations is included in the plan, such as damage to computing and network infrastructure. Business continuity planning is a subset of risk management. A business continuity plan should have a well-defined scope. Some of the factors that may be considered are business activities, systems, and controls. What business activities will be covered by the plan? What type of systems will it cover?
What type of controls will it cover? Asking these types of questions early allows business continuity planners to prioritise the information necessary to answer them. Continuity planners have various tools at their disposal, and one of the more common ones is the business impact assessment. A business impact assessment is a key part of the business continuity process that analyses mission-critical business functions and identifies and quantifies the impact the loss of those functions may have on the organization. A business impact assessment is critical in assessing the cost of business disruption and how disaster recovery plays a role in mitigating it. The business impact assessment has several crucial elements, which include executive backing, a deep understanding of the organization, and business impact assessment tools, processes, and findings. The business impact assessment yields a prioritised listing of risks that might disrupt the organization's business. It will look something like what I have displayed here. A planner would then use this information to select controls that may mitigate the risks facing the organization. The risks in this scenario are listed in descending order of expected loss. Based on this information, BIA planners can make decisions about control implementations that factor in cost.
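The prioritised risk listing that a BIA yields amounts to a sort in descending order of expected loss. A small sketch, with hypothetical risk names and dollar figures standing in for real assessment data:

```python
# Hypothetical BIA output: each entry pairs a risk with its expected annual loss.
risks = [
    {"risk": "Hurricane damage to data centre", "expected_loss": 100_000},
    {"risk": "Ransomware outbreak", "expected_loss": 250_000},
    {"risk": "Extended power outage", "expected_loss": 40_000},
]

# Prioritise: largest expected loss first, as in the chart described above.
prioritised = sorted(risks, key=lambda r: r["expected_loss"], reverse=True)
for r in prioritised:
    print(f"{r['risk']}: ${r['expected_loss']:,}")
```

A planner would read this list top-down when deciding which controls justify their cost.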
2. Lecture 2 - Business Continuity Controls and Key Technical Concepts
Business continuity planning is integral to the availability of systems, and as stated earlier, availability is part of the CIA triad. It should therefore go without saying that in order to enhance business continuity for an organization, availability controls should be implemented. One way IT professionals do this is through the use of redundant systems in IT. The term "redundant" can refer to various situations. It can describe computer or network system components such as fans, hard disc drives, servers, operating systems, switches, and telecommunications links that are designed to back up primary resources in case they fail. Redundancy is commonly built into the network at the routing protocol level: protocols are configured so that if one link goes down or becomes congested, traffic is routed over a different network link. SPOF, or single point of failure, is another business continuity concern. A single point of failure is a part of a system that, if it fails, will stop the entire system from working. Single points of failure are undesirable in any system. A single point of failure poses a high level of risk to a network because if the device fails, a segment or even the entire network suffers. Some of the hardware devices that present single point of failure vulnerabilities are firewalls, routers, network access servers, switches, bridges, hubs, and authentication servers. Regular maintenance is the best defence against a single point of failure. There are two key technical concepts that improve the availability of systems.
High availability, otherwise known as HA, is one of them. High availability is the concept or goal of ensuring your critical systems are always functioning. In practice, this means creating and managing the ability to automatically fail over to a secondary system if the main system goes down for any reason, as well as eliminating all single points of failure from your infrastructure. High availability is a strategy that requires careful planning and the use of tools. Having a well-thought-out high availability architecture is important because the cost of downtime is high, whether we're talking about dollar figures, the loss of staff productivity, or the life of a patient who becomes vulnerable to risk the very moment the hospital system goes down. The other key technical concept is fault tolerance. Fault tolerance describes a computer system or technology infrastructure that is designed in such a way that when one component fails, be it hardware or software, a backup component takes over operations immediately so that there is no loss of service.
The concept of having backup components in place is called redundancy, and the more backup components you have in place, the more tolerant your network is of hardware and software failures. Take, for example, a single application running at the same time on two servers. The servers essentially mirror each other, so that when an instruction is executed on the primary server, it is also executed on the secondary server. If the primary server crashes or loses power, the secondary server takes over with zero downtime. There are two small drawbacks to fault tolerance, however: it is more costly because both servers are running all the time, and there is a risk of both servers going down if there's a problem with the operating system that the servers are using. As a security professional, once you have a better understanding of the significance of the terms "high availability" and "fault tolerance," the next step would be to evaluate your current backup plan. Is your disaster recovery plan thorough enough? Does your high availability architecture include failover options? How tolerant are your networks of failure? The answers to these questions will determine how prepared your organisation is for when the unexpected happens.
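The primary/secondary mirroring described above can be sketched as a toy active/passive failover. Server and request names here are hypothetical, and a real setup would involve heartbeats and state replication rather than a simple try/except:

```python
class Server:
    """A hypothetical server that can be healthy or down."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} handled {request}"

def serve(request, primary, secondary):
    """Try the primary first; fail over to the mirrored secondary on error."""
    try:
        return primary.handle(request)
    except RuntimeError:
        return secondary.handle(request)

primary = Server("primary")
secondary = Server("secondary")
assert serve("req-1", primary, secondary) == "primary handled req-1"
primary.healthy = False  # primary crashes or loses power
assert serve("req-2", primary, secondary) == "secondary handled req-2"
```

The second drawback mentioned above also shows here: if both `Server` objects shared the same fault (say, a bad OS patch), both would raise and the failover would not help.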
3. Lecture 3 - RAID Technology
RAID is short for "redundant array of independent disks." Originally, the term RAID was defined as a redundant array of inexpensive disks, but now it usually refers to a redundant array of independent disks. RAID storage uses multiple discs in order to provide fault tolerance, improve overall performance, and increase the storage capacity of the system. This is in contrast with older storage devices that used only a single disc drive. RAID allows you to store the same data redundantly in a balanced way to improve overall performance. RAID disc drives are common on servers but aren't usually required on personal computers. So, how does RAID work? With RAID technology, data can be mirrored on one or more discs in the same array, so that if one disc fails, the data is preserved. Thanks to a technique known as "striping," a technique for spreading data over multiple disc drives, RAID also offers the option of reading from or writing to more than one disc at the same time in order to improve performance. In this arrangement, sequential data is broken into segments, which are sent to the various discs in the array, speeding up throughput.
A typical RAID array uses multiple discs that appear to be a single device, so it can provide more storage capacity than a single disk. RAID devices use many different architectures, called levels, depending on the desired balance between performance and fault tolerance. RAID levels describe how data is distributed across the drives. For the CISSP exam, you will need to be familiar with RAID level 1 and RAID level 5. RAID level 1 is commonly referred to as "mirroring," whereas RAID level 5 is commonly referred to as "striping with parity." Let's take a closer look at each. RAID 1 consists of an exact copy or mirror of a set of data on two or more disks. A classic RAID 1 mirrored pair contains two disks. This configuration offers no parity, striping, or spanning of disc space across multiple disks. Since the data is mirrored on all discs belonging to the array, the array can only be as big as the smallest member disk. This layout is useful when read performance or reliability is more important than write performance or the resulting data storage capacity. RAID 5 is a RAID configuration that uses disc striping with parity. Because data and parity are striped across all the disks, no single disc is a bottleneck. Striping also allows users to reconstruct data in the case of disc failure. Reads and writes are more evenly balanced in this configuration, making RAID 5 the most commonly used RAID method.
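RAID 5's parity idea can be illustrated with a short sketch: the parity block is the byte-wise XOR of the data blocks, so any single lost block can be reconstructed from the survivors. The block contents below are hypothetical placeholders, not a real on-disc layout:

```python
from functools import reduce

def parity(blocks):
    """Compute the parity block as the byte-wise XOR of all data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks striped across three discs
p = parity(data)                      # parity block stored on a fourth disc

# Disc holding data[1] fails; rebuild its block by XOR-ing the
# surviving data blocks with the parity block.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

This is why RAID 5 survives exactly one disc failure: XOR-ing all remaining blocks cancels out the known data and leaves the missing block. (In real RAID 5, parity rotates across all discs rather than living on a dedicated one.)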
1. Lecture 1 - Risk Management
Welcome to unit six. In unit six, we will discuss risk management. By the end of this unit, you should be able to compare and contrast qualitative versus quantitative risk assessment. You should also be able to describe what risk assessment is and identify the differences between threats, risks, and vulnerabilities. You should further be able to identify the five risk management strategies, and you should have a thorough understanding of the various security controls available. Let's get started. As a security professional, you may be tasked with assessing risk. Risks can take many forms, from malicious attackers to security patches to malware. But how do we prioritise these risks? This is where risk assessment enters the equation. Risk assessment is the process of identifying and triaging the risks facing an organisation based upon the likelihood of their occurrence and the expected impact they will have on the organization's operations. Before we continue, however, we should provide some clarity. It is commonplace to use the terms risk, threat, and vulnerability interchangeably. The reality, however, is that these are three different concepts. A vulnerability is a weakness in the system that allows a threat source to compromise its security. It could be a software, hardware, or human weakness that can be exploited. A vulnerability may be a service running on a server, unpatched applications, an unrestricted wireless access point, an open port on a firewall, lax physical security that allows anyone to enter the server room, or unenforced password management on servers and workstations. A threat is any potential danger that is associated with the exploitation of a vulnerability. If the threat is that someone will identify a specific vulnerability and use it against the company or individual, then the entity that takes advantage of the vulnerability is referred to as a threat agent.
A threat agent could be an intruder accessing the network through a port on a firewall, a process accessing data in a way that violates security policy, or an employee circumventing controls in order to copy files to a medium that could expose confidential information. A "risk" is the likelihood of a threat source exploiting a vulnerability and the corresponding business impact. If a firewall has several ports open, there is a higher likelihood that an intruder will use one to access the network using an unauthorised method. If users are not educated on the process and procedures, there is a higher likelihood that an employee will make an unintentional mistake that may destroy data.
If an intrusion detection system is not implemented on a network, there is a higher likelihood that an attack will go unnoticed until it is too late. Risk ties the vulnerability, threat, and likelihood of exploitation to the resulting business impact. If we were to describe this visually in a Venn diagram, you would see something that looks like this: risk is the area that crosses over between threat and vulnerability. While we are providing clarity on terminology, it is important for the CISSP exam that you are also familiar with the term "threat vector." A threat vector is the route that a malicious attack may take to get past your defences and infect your network. The next step in the risk assessment process ranks those risks by two factors: likelihood and impact. The likelihood of a risk is the probability that it will actually occur. For example, there is a risk of hurricanes in both Florida and Louisiana. When you look at the data, however, you find the probability of hurricanes in Florida is significantly higher than in Louisiana. Therefore, as a security professional, you may be hypervigilant about the risk of hurricanes in Florida, whereas in Louisiana you may ignore it. The impact of a risk is the amount of damage that will occur if the risk materializes. For example, a hurricane may wipe out the data centre altogether, whereas a minor flood may cause only some damage.
2. Lecture 2 - Risk Assessment Techniques
When performing risk assessments, we have two different techniques available to assess the likelihood and impact of a risk: qualitative techniques and quantitative techniques. Qualitative risk assessment is concerned with discovering the probability of a risk event occurring and the impact that the risk will have if it does occur. Qualitative techniques evaluate risk likelihood and impact via the use of subjective ratings. Qualitative techniques may include, for example, brainstorming, interviewing, historical data, SWOT analysis, and risk rating scales. Quantitative techniques, on the other hand, use objective numeric ratings to evaluate risk likelihood and impact. Displayed here, we have an example of a qualitative risk assessment chart. When considering a specific risk, the assessor first rates the likelihood as low, medium, or high and then does the same for the impact. The chart then categorises the overall risk, doing so for a single risk-asset pairing. Security professionals performing quantitative risk assessments, as they conduct the assessment, must first determine the values of several variables. The first of these variables is the asset value, or AV. The asset value is the dollar value of an asset. Risk assessors determining asset value have several options at their disposal. The original cost technique simply looks at invoices from the asset purchase and uses the purchase price as the asset value. The benefit of this approach is that it is rather easy to use: it simply requires referencing invoices to determine the asset value. The downside is that the actual asset value may be drastically different from what's on the invoice if the asset's value has changed over time. The depreciated cost approach reduces the value of an asset over time as it ages. The depreciation technique uses an estimate of the asset's useful life and then gradually decreases the asset's value until it reaches zero at the end of its projected lifespan. The replacement cost technique determines the actual cost of replacing an asset. Risk managers favour this approach because it produces results that most closely approximate the actual cost that an organisation will incur if the risk materializes. It takes into consideration current market prices to determine the actual cost of replacing the asset and uses that cost as the asset value.
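The three valuation techniques can be sketched as simple functions. The dollar figures are hypothetical, and straight-line depreciation is assumed (the lecture does not specify a depreciation schedule):

```python
def original_cost(invoice_price):
    """Original cost: the purchase price straight from the invoice."""
    return invoice_price

def depreciated_cost(invoice_price, useful_life_years, age_years):
    """Straight-line depreciation down to zero at end of useful life."""
    remaining = max(useful_life_years - age_years, 0)
    return invoice_price * remaining / useful_life_years

def replacement_cost(current_market_price):
    """Replacement cost: what it costs to replace the asset today."""
    return current_market_price

assert original_cost(10_000) == 10_000
# A $10,000 asset, 2 years into a 5-year life, is valued at $6,000.
assert depreciated_cost(10_000, useful_life_years=5, age_years=2) == 6_000
# Past its projected lifespan, the depreciated value bottoms out at zero.
assert depreciated_cost(10_000, useful_life_years=5, age_years=7) == 0
```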
3. Lecture 3 - Quantitative Risk Factors
We previously discussed qualitative and quantitative risk factors. We also stated that the asset value is a factor that must be considered when conducting quantitative risk analysis. Another factor that must be considered is the exposure factor, or EF. The exposure factor is the percentage of loss that a real or perceived threat could cause for a specific asset. The exposure factor is a subjective value that the person assessing the risk must define. If the asset is completely lost, the exposure factor is 100%. Another variable we must consider is the single loss expectancy, or SLE. The SLE is a dollar amount that is assigned to a single event; it represents the company's potential loss amount if the specific threat were to take place.
To calculate the single loss expectancy, we simply multiply the exposure factor by the asset value. In other words, AV times EF equals SLE. So let's take a look at a real-world example. Let's suppose that we expect a hurricane to damage 25% of our data center. We would set the exposure factor to 25%. If the data centre is valued at $40 million, or in other words has an AV of $40 million, we can calculate the single loss expectancy, or SLE, by multiplying $40 million by 25%. In this case, our single loss expectancy would be $10 million. Interpreted another way, this implies that a single hurricane would yield $10 million worth of damage to our data center. That's the impact of the risk. The SLE, however, only gives us a sense of impact. As previously discussed, risk assessment must also consider the likelihood of a risk, and that is where the ARO enters the equation, which we will discuss next. Now that we are familiar with the asset value and the exposure factor and are able to successfully combine them to determine our single loss expectancy, let's take a look at some other important risk factors to consider. The annualised rate of occurrence, or ARO, is the value that represents the estimated frequency of a specific threat taking place within a twelve-month time frame.
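The hurricane example works out as follows in a short sketch of the SLE formula:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = AV x EF: expected dollar loss from one occurrence of the threat."""
    return asset_value * exposure_factor

AV = 40_000_000   # data centre asset value: $40 million
EF = 0.25         # hurricane expected to damage 25% of the data centre
SLE = single_loss_expectancy(AV, EF)
assert SLE == 10_000_000  # a single hurricane yields $10 million in damage
```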
The range can be from zero, in other words never, to 1.0, in other words once a year, to greater than one, several times a year, and anywhere in between. For instance, if a fire taking place and damaging our data warehouse is expected once every ten years, the ARO value is 0.1. The annualised loss expectancy, or ALE, is the product of the annualised rate of occurrence and the single loss expectancy. A risk analysis should incorporate both likelihood and impact values. In this case, the impact of the risk is expressed in terms of SLE, whereas the likelihood is expressed in terms of ARO. Returning to our hurricane example, with a hurricane expected once every 100 years the ARO is 0.01, and multiplying the SLE by the ARO gives an annualised loss expectancy of $100,000. This means that we should expect to lose $100,000 each year. It's important to remember that in reality, this cost wouldn't occur each year. What really would happen is that $10 million in damage would occur each time the event occurred. But since the occurrence happens only once every 100 years, it averages out to $100,000 a year, and that is quantitative risk in a nutshell. The CISSP exam will definitely require you to make similar calculations, so to best prepare yourself, make sure you memorise these formulas and understand what each acronym means. Once we've performed quantitative risk analysis, we can more accurately assess our ability to restore IT services and components quickly in the event of a failure. To do this, we need to consider two things. Specifically, we want to know whether the asset is nonrepairable or repairable. If it is a nonrepairable asset, we consider the metric of mean time to failure, or MTTF. The mean time to failure is the amount of time we expect will pass before the asset fails. If the asset is repairable, then we can look at two different values. The first is the mean time between failures, or MTBF. The mean time between failures is simply the average amount of time that passes between failures of a repairable asset.
The second value we look at for repairable assets is the mean time to repair, or MTTR. The mean time to repair is the amount of time that an asset will be out of service for repair after it fails. If we consider both the MTBF and MTTR values together, it provides us with a better understanding of the expected downtime for our services and components.
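The ALE formula, and the way MTBF and MTTR combine into expected uptime, can be sketched as follows. The availability ratio MTBF / (MTBF + MTTR) is a standard reliability formula rather than one stated in this lecture, and the repair figures are hypothetical:

```python
def annualized_loss_expectancy(sle, aro):
    """ALE = SLE x ARO: expected loss per year."""
    return sle * aro

# Hurricane example: $10M per event, expected once every 100 years (ARO = 0.01).
assert annualized_loss_expectancy(10_000_000, 0.01) == 100_000

def availability(mtbf_hours, mttr_hours):
    """Expected fraction of time a repairable asset is in service:
    MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical asset: fails on average every 990 hours of operation,
# and each repair takes 10 hours.
assert availability(990, 10) == 0.99
```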
4. Lecture 4 - Risk Management Strategies
Risk in the context of security is the possibility of damage happening and the ramifications of such damage, should it occur. Risk management is the process of identifying and assessing risk, reducing it to an acceptable level, and ensuring that it remains at that level. There is no such thing as a secure environment. Every environment has vulnerabilities and threats. The skill is in identifying these threats, assessing the probability of them actually occurring and the damage they could cause, and then taking the right steps to reduce the overall level of risk in the environment to what the organisation identifies as acceptable. There are various risk management strategies. Let's take a look at five of these strategies. Risk avoidance is the elimination of hazards, activities, and exposures that can negatively affect an organization's assets. Avoiding risk implies that you'll actually have to change your organization's business practices so that you are no longer in a position where that risk can affect your business. For instance, let's assume our data centre was located in an earthquake-prone area. An example of risk avoidance would be to relocate the data center, in which case we would be avoiding the risk of earthquakes. As you can see, this decision implies that you will actually make a change to your business to avoid the risk.
Risk transference refers to the shifting of the burden of loss for a risk to another party through legislation, contract, insurance, or other means. For example, in our data centre located in an earthquake-prone area, we could transfer the risk by purchasing insurance for the datacenter that explicitly covers earthquake damage. The risk would be shifted, and the burden of any potential loss would go from the organisation onto the insurer.
Another approach is risk mitigation, where the risk is reduced to a level considered acceptable enough to continue conducting business. The implementation of firewalls, training, and intrusion detection and prevention systems, or other control types, represents risk mitigation efforts. Another approach would be to accept the risk. Risk acceptance means the company understands the level of risk it is facing, as well as the potential cost of damage, and decides to just live with it and not implement a countermeasure. Many companies will accept the risk when the cost-benefit ratio indicates that the cost of the countermeasure outweighs the potential loss value. And finally, there is risk deterrence. Risk deterrence takes actions that dissuade a threat from exploiting a vulnerability. For example, if your data centre is located in a high-crime area, a barbed wire fence would be one example of a deterrent, as it dissuades an individual from attempting a break-in.
5. Lecture 5 - Security Controls
Security controls are safeguards or countermeasures to avoid, detect, counteract, or minimise security risks to physical property, information, computer systems, or other assets. Sometimes multiple controls are designed to achieve the same control objective. This is known as the defence in depth principle. With defence in depth, control types are put into place to provide multiple security controls in a layered approach. For example, let's say we are trying to secure our network from a potential threat. With defence in depth, we can utilise multiple security controls to achieve the same objective, which is protection of the network. To do this, we will want to install firewalls, rule-based access controls, and virus scanners.
We'd want to implement secure policies and procedures. We'd want to ensure that all systems are patched and up-to-date and that our VPNs and architecture are secure. In this example, we are utilising various layers via defence in depth to secure the network and achieve the same objective. Security controls can be categorised by group. Let's discuss grouping controls by their purpose, whether they're designed to prevent, detect, or correct security issues, and then we'll discuss them by their mechanism of action, or in other words, the way in which they work. Security controls can serve three purposes. A security control can be preventive. A preventive control is designed to prevent a security issue from occurring in the first place. Examples of preventive controls would include locks, implementing a badge system, utilising security guards, and/or utilising a biometric system. Another type of control would be a detective control. Detective controls are aimed at identifying a potential security issue that has taken place. For example, an intrusion detection system can detect signs of breaches, and therefore it is a detective control. Finally, we have corrective controls. A corrective control fixes components or systems after an incident has occurred.
For example, if a malicious attacker wipes out all the data, restoring the information from backup would be an example of a corrective control. Controls can also be categorised by their mechanism of action. Controls can be technical, operational, or management controls. A technical control uses technology to achieve security control objectives. Operational controls use human-driven processes to manage technology in a secure manner. Management controls improve the security of the risk management process itself. When implementing controls, it is also important to familiarise yourself with control failure. There are two main ways that a control can fail. One way would be a false positive error, which occurs when a control triggers inadvertently when it should not. For example, the motion detection system in the data centre sets off the alarm when there really is no movement within the data center. This would be an example of a false positive: the system was inadvertently triggered when it should not have been. The second type of error is a false negative error, which occurs when a control fails to trigger in a situation where it should. To go back to our previous example, think about an intruder entering the premises and the motion detection system failing to trigger. This would be a false negative error because the system should have triggered, but it did not. And this concludes the final unit of this course. I hope that you have found the information provided in these lectures useful. If you are preparing for the CISSP exam, the content of this course should suffice to prepare you for domain one of the CISSP exam. I wish you the best of luck on your exam and in your future endeavors.
1. Lecture 1 : Data Security
Welcome to Unit One. In Unit One, we will cover data security. By the end of this unit, you should be able to list the three states of data. You should be able to define big data. You must be able to list the types of data policies. Furthermore, you should be able to identify the main roles in the data security pyramid. And finally, you should be able to outline the ten components of the GAPP data privacy principles. Without further ado, let's jump right into lecture one, data security. In almost every case, the information at the core of our information system is the most valuable asset to a potential adversary. Information within a computer information system is represented as data. This information may be stored, known as "data at rest." It may be transported between parts of our system, known as "data in motion," or it may be actively used by the system itself, otherwise known as "data in use." Each one of these three states poses unique vulnerabilities. For example, if an insider copies data to a thumb drive and gives it to unauthorised parties, compromising its confidentiality, the data is vulnerable at rest. An example of data being vulnerable in motion would be data that is intercepted on the network, modified by an external actor, and then relayed as the altered version, thus compromising its integrity.
This is also known as a man-in-the-middle attack. Data in use is vulnerable when, for example, it is deleted by a malicious process that exploits a time-of-check to time-of-use, or race condition, vulnerability, compromising its availability. If data is vulnerable whether it's at rest, in motion, or in use, is there anything we can do to protect the data in any of these three states? Sure. One way to protect the data would be to have a clear set of policies and procedures surrounding the proper use of data and the security controls that must be in place for sensitive information. Second, we can use encryption to protect sensitive information, whether it's at rest or in transit. Finally, we can use access controls to restrict access to information while it's on a storage device. For instance, we can list specifically who may access, modify, or delete information stored on a device. It is up to you as a security professional to implement the controls that best suit your organisational needs. When it comes to securing data, for the CISSP exam you should be familiar with the term "big data." Big data is an evolving term that describes any voluminous amount of structured, semistructured, and unstructured data that has the potential to be mined for information. Big data is often characterised by the three Vs: the extreme volume of data, the wide variety of data types, and the velocity at which the data must be processed. Big data necessitates special security considerations, which must be taken into account when taking the CISSP exam. Security professionals must think about how this information is secured and how it is accessed.
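The access control idea above, listing specifically who may access, modify, or delete stored information, can be sketched as a minimal ACL with default deny. The file and user names are hypothetical:

```python
# Hypothetical ACL: per file, per user, the explicitly granted actions.
ACL = {
    "quarterly_salaries.xlsx": {
        "alice": {"read", "modify"},
        "bob": {"read"},
    },
}

def check_access(user, filename, action):
    """Allow an action only if the ACL explicitly grants it (default deny)."""
    return action in ACL.get(filename, {}).get(user, set())

assert check_access("alice", "quarterly_salaries.xlsx", "modify")
assert check_access("bob", "quarterly_salaries.xlsx", "read")
assert not check_access("bob", "quarterly_salaries.xlsx", "delete")
assert not check_access("mallory", "quarterly_salaries.xlsx", "read")
```

Unknown users and unknown files fall through to an empty grant set, so anything not explicitly permitted is refused.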
2. Lecture 2: Data Security Policies
In lecture two, we will discuss data security policies. Security policies form the building blocks of the information security programme for the organisation as a whole. Due to the important nature of data security policies, there are certain criteria that they should meet. First, the data policy serves as the foundational authority for data security. It should be clear, so the expectations for data security are understood without question by all organisational members. Data security policies should also provide guidance with respect to access to information. Finally, they should provide a process for granting policy exceptions. One thing you should remember is that a policy needs to be technology- and solution-independent and must outline the goals and missions without tying the organisation to specific ways to accomplish them. Common security policies include acceptable use policy, risk management policy, vulnerability management policy, data protection policy, access control policy, business continuity policy, personnel security policy, email policy, and incident response policy, to name a few of the many. A data classification policy describes the security levels of information used in an organisation and the process for assigning information to a particular classification level. In the context of information security, data is classified based on its level of sensitivity and impact.
Classifications may be assigned based on the sensitivity of the information and its criticality to the enterprise. In a commercial business setting, for example, classification levels ranked from lowest to highest may range from public to confidential. In the military, however, classifications range from unclassified through sensitive but unclassified and confidential, up to secret and top secret. We discussed security policies earlier, and I listed a number of them. Let's look at three of the more common ones found across most organisations. Data storage policies identify appropriate storage locations and indicate the level of encryption required as well as access control requirements. Data transmission policies identify appropriate data transmission mechanisms and likewise provide encryption requirements. Another type of data policy that is commonly found is a data lifecycle policy. A data lifecycle policy outlines the flow of an information system's data throughout its life cycle, from creation and initial storage to the time when it becomes obsolete and is deleted. A data lifecycle policy should address two things: how data is retained (the data retention policy) and how data is disposed of (the disposal policy).
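A retention check like the one a data lifecycle policy mandates can be sketched as a small function. The 90-day retention period and the record names below are illustrative assumptions, not values from any real policy:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention period; a real value would come from policy.
RETENTION = timedelta(days=90)

def records_to_dispose(records, now=None):
    """Return names of records whose age exceeds the retention period.

    `records` is a list of (name, creation_datetime) pairs.
    """
    now = now or datetime.now(timezone.utc)
    return [name for name, created in records if now - created > RETENTION]

# Illustrative records, evaluated as of a fixed date for reproducibility.
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
records = [
    ("q1_report", datetime(2024, 1, 15, tzinfo=timezone.utc)),    # ~138 days old
    ("may_invoice", datetime(2024, 5, 20, tzinfo=timezone.utc)),  # 12 days old
]
print(records_to_dispose(records, now))  # ['q1_report']
```

The disposal step itself (secure deletion or media sanitisation) would then be carried out per the disposal policy.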
3. Lecture 3: Data Security Roles
In lecture three, we will discuss data security roles. Information and data security require a collective effort among the various roles of responsibility in order to protect information. Most organisations today follow a three-tiered model for roles and responsibilities related to information security. This three-tiered model includes data owners, data stewards, and data custodians. At the top of the pyramid, we have data owners. The data owner or information owner is usually a member of management who is in charge of a specific business unit and who is ultimately responsible for the protection and use of the specific subset of information. The data owner has due care responsibilities and thus will be held responsible for any negligent act that results in the corruption or disclosure of the data. The data owner decides upon the classification of the data he or she is responsible for and alters that classification if the business need arises. The data owner is also responsible for ensuring that the necessary security controls are in place, defining security requirements per classification and backup requirements, approving any disclosure activities they're responsible for, ensuring that proper access rights are being used, and defining user access criteria. The data owner approves access requests or may choose to delegate this function to business unit managers, and the data owner will deal with security violations pertaining to the data he or she is responsible for protecting. Again, this is summarised here. Data owners are usually the business leaders who have responsibility for the mission area most closely related to the dataset, and an example would be that a VP of HR might be the data owner of all employee information. The data steward is a role within an organisation responsible for utilising an organization's data governance processes to ensure the fitness of data elements, both the content and the metadata. 
The data steward handles the implementation of the high-level policy set by the data owner. A data steward, for example, might make the day-to-day decisions about who has access to the data set. At the bottom of the pyramid is the data custodian. The data custodian, or information custodian, has responsibility for maintaining and protecting the data. This role is usually filled by the IT or security department, and the duties include implementing and maintaining security controls, performing regular backups of the data, periodically validating the integrity of the data, restoring data from backup media, retaining records of activity, and fulfilling the requirements specified in the company's security policies, standards, and guidelines that pertain to information security and data protection.
4. Lecture 4: Data Privacy
In lecture four, we will discuss data privacy. Technological innovation and the power of data analytics create remarkable value but also present new challenges. Threats to the security and privacy of personal information continue to grow as the value of information has increased. The protection of personal information is of paramount importance to many individuals and organisations around the world. Security professionals must always keep the protection of personal information and its privacy top of mind. We will now discuss the ten principles of the Generally Accepted Privacy Principles and how data governance programmes should help with the protection of personal information. The Generally Accepted Privacy Principles, or GAPP, were developed through the collaboration of four major industry organisations: the American Institute of Certified Public Accountants (AICPA), the Canadian Institute of Chartered Accountants (CICA), the Information Systems Audit and Control Association (ISACA), and the Institute of Internal Auditors. The GAPP framework was previously known as the AICPA/CICA Privacy Framework and is founded on a single privacy objective: personally identifiable information must be collected, used, retained, and disclosed in compliance with the entity's privacy notice and with criteria set forth in the GAPP issued by the AICPA and CICA. This privacy objective is supported by ten main principles and over 70 objectives with associated measurable criteria. Let's look at each of these principles. The first principle is management, and this states that an organisation handling private information should have policies, procedures, and a governance structure in place to protect the privacy of that information. The second principle is notice, and this states that anyone who is the subject of records maintained by the organisation should receive notice of that fact, as well as access to the privacy policies and procedures followed by the organisation.
The third principle is choice and consent. An organisation should inform data subjects of all their options regarding the data collected about them and obtain consent from those individuals for the collection, storage, use, and sharing of their personal information. The fourth GAPP principle is collection. Organisations should only collect personal information for purposes disclosed in their privacy notices. The fifth principle is use, retention, and disposal. Organisations should only use that information for the disclosed purposes and not use it for other purposes simply because they already have the data. Additionally, the organisation should dispose of the data securely as soon as it is no longer needed for the disclosed purpose. The sixth principle is access. Organisations should provide data subjects with the ability to review and update their personal information. The seventh principle is disclosure to third parties. The organisation should only share information with third parties if it is consistent with the purposes disclosed in its privacy notices and the organisation has the individual's consent to do so. The eighth principle is security. Organisations must secure private information against unauthorised access. Data quality is the ninth principle, and this states that organisations should take reasonable steps to ensure that the personal information they maintain is accurate, complete, and relevant. Finally, the tenth principle is monitoring and enforcement. Organisations should have a programme in place to monitor compliance with their privacy policies and provide a dispute resolution mechanism. This rounds out the ten principles of GAPP.
Data Security Controls
1. Lecture 1: Developing Security Baselines
Welcome to Unit Two. In Unit Two, we will discuss data security controls. By the end of this unit, you should be able to identify the uses of security baselines, list the three-step approach to security baselines, and list the pros and cons of professionally offered security standards. You should also be able to understand encryption and decryption and contrast the differences between them. Let's get started. Lecture One: Developing Security Baselines. Security professionals are tasked with maintaining the security of software and hardware components for the enterprise. These systems must be configured in such a manner that they meet security standards as set by security professionals and company policy. Doing this for each device individually can be a nearly impossible and daunting task. Enter security baselines. Security baselines provide enterprises with an effective way to specify the minimum standards for computing systems and efficiently apply those standards across devices. Organizations generally begin standardization efforts by developing a baseline standard. Creating and maintaining your security baseline standards will be an ongoing process requiring the help and support of a number of departments within the organization. When developing security baselines, it's important to bear in mind that baseline security standards describe only the minimum requirements. They are by no means a comprehensive and complete set of requirements to be deployed across devices. Also, keep in mind that your baseline standards should be flexible, which means the standards should generally be generic in nature. For example, stating that a company laptop should always be stored in room 105 may not be a good idea, as it probably will not stand the test of time. What if, in a few months, room 105 is repurposed as a boardroom to hold all meetings?
Rather, we want to rephrase our requirement to something like this: electronic devices should remain under the positive control of the authorized user. Remember, baseline standards set forth the minimum requirements that apply to every device in the enterprise. Requirements that are too strict will either be impossible to follow or will require such constant updates that it would be unfeasible to uphold them. So let's quickly recap. Why are baselines so generic? Baselines are generic because they set forth the minimum requirements, and those requirements must then be applied to every device in the enterprise. If a new device joins the network, security professionals can simply turn to the security baseline to determine the generic controls that should be enforced. Anything too specific would be unsustainable, given the vast number of devices on the enterprise network. When developing baseline standards, it is often helpful to think of a three-step approach. First, set the baseline requirements. Second, once the baselines are set, security professionals and the IT department should deploy the baselines across the enterprise. Finally, the systems should be monitored for compliance with the baseline. To summarize: set, deploy, and monitor.
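The set/deploy/monitor cycle can be sketched in a few lines. The baseline settings and device fields below are hypothetical examples, not a real benchmark:

```python
# 1. Set: define the minimum requirements every device must meet.
BASELINE = {"disk_encryption": True, "screen_lock_minutes": 15, "antivirus": True}

# 2. Deploy: apply the baseline on top of a device's existing configuration.
def deploy(device_config: dict) -> dict:
    return {**device_config, **BASELINE}

# 3. Monitor: report any settings that have drifted from the baseline.
def monitor(device_config: dict) -> list:
    return [k for k, v in BASELINE.items() if device_config.get(k) != v]

laptop = deploy({"hostname": "mkt-laptop-01"})  # illustrative device
laptop["antivirus"] = False                     # simulated drift after deployment
print(monitor(laptop))                          # ['antivirus']
```

A real implementation would pull device state from a configuration management tool rather than in-memory dictionaries, but the three-step loop is the same.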
2. Lecture 3: Customizing Security Standards
Lecture Three: Customising Security Standards. We have now identified security standards and possible sources of security standards, and we stated previously that security standards can and should be customised to best meet your organisational needs. Security standards offered by professional bodies have both pros and cons. One benefit is that they provide organisations with an excellent starting point. This is especially useful for small to medium organisations that lack the means or the technical know-how to develop their own security standards. However, one major drawback is that such a standard still requires customisation for each organisation's security and business requirements, and while this customisation is generally good and should be done, it requires a lot of legwork. Let's take a look at a real-world example of how this might come into play. Assume that an industry standard suggests using full disk encryption to protect stored data on an endpoint and suggests the use of AES encryption with a 128-, 192-, or 256-bit key. However, the organisation might be under a more stringent compliance requirement that mandates the use of 256-bit keys and specifically prohibits the use of 128- or 192-bit keys. One possible solution would be to use the benchmark standard but modify it to require the use of a 256-bit key, removing the options to use a 128- or 192-bit alternative. So, how do we do this? We can document the changes by writing a security standard that references another standard.
For example, an organisation might say: "We adopt the Center for Internet Security benchmark standard for Windows Server 2012 R2, dated April 28, 2016, with the following modifications. One, we change the requirement governing password expiration to set the expiration period to 180 days instead of the standard 60 days. Two, we change requirement 1.2.2 to lock out accounts after five failed login attempts rather than the standard ten." In this manner, we are writing a security standard that references another standard. There are some key points to remember. First, any changes that enterprises make to security standards should be tied back to specific security and/or business requirements; in other words, they should not be made arbitrarily. Second, always document the specific reasons for the deviations.
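One way to capture a "standard that references another standard" is to record the base requirements plus documented overrides, each tied to its business reason. The requirement names and values below are illustrative, not actual CIS benchmark items:

```python
# Hypothetical base benchmark values (illustrative, not real CIS items).
BASE_STANDARD = {
    "password_expiration_days": 60,
    "lockout_after_failed_logins": 10,
}

# Each override records the new value and the documented reason for the
# deviation, so changes are never made arbitrarily.
OVERRIDES = {
    "password_expiration_days": (180, "internal policy permits 180-day rotation"),
    "lockout_after_failed_logins": (5, "compliance mandate requires stricter lockout"),
}

def effective_standard(base: dict, overrides: dict) -> dict:
    """Merge the base standard with documented overrides."""
    merged = dict(base)
    for key, (value, _reason) in overrides.items():
        merged[key] = value
    return merged

print(effective_standard(BASE_STANDARD, OVERRIDES))
# {'password_expiration_days': 180, 'lockout_after_failed_logins': 5}
```

Keeping the reason string alongside each override preserves the audit trail the lecture calls for.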
4. Lecture 4: Data Encryption
Lecture Four: Data Encryption. We've all heard the term "encryption," but what is encryption? Encryption is the process of encoding a message or information in such a way that only authorised parties can access it and those who are not authorised cannot. It converts information from plaintext into encrypted ciphertext. But how does it do this? Encryption uses algorithms. An algorithm is the procedure that the encryption process follows; the specific algorithm is called the cipher, or code. There are many types of encryption algorithms, and the goal of the encryption and the level of security required determine the most effective choice. Triple DES, RSA, and Blowfish are some examples of encryption algorithms, or ciphers. If that's encryption, then what's decryption? Decryption is the process of taking encoded or encrypted data and converting it back into text that you or the computer can read and understand; it converts information from encrypted ciphertext back into plaintext. Here's a very basic example of how this works. Let's say you send a message. Your phone gets the key from the app server and encrypts the message so only your friend can open it. Then your friend's phone receives the message and decrypts it using their personal key. Finally, your friend reads the message. That is a basic example of encryption and decryption. Another thing you should be familiar with, and central to the concept of encryption, is private-key and public-key encryption. A private key, or symmetric key, means that the encryption and decryption keys are the same; the two parties must have the same key before they can achieve secure communication. So what's a public key? A public key means that the encryption key is published and available for anyone to use, while only the receiving party has access to the decryption key that enables them to read the message. Here is an example. Let's assume you are the sender of a plaintext message.
That plaintext message is encrypted using the recipient's public key and converted into ciphertext. The ciphertext is then decrypted using the recipient's private key. Upon completion, the decrypted message becomes plaintext once again and is read by the recipient. As you can see, there are a public key and a private key involved in this process, and you should be familiar with both for the CISSP exam. And this concludes the final unit of this course. I hope that you have found the information provided in these lectures useful. If you are preparing for the CISSP, the contents of this course should suffice to prepare you for Domain Two of the CISSP exam. I wish you the best of luck on the exam and in your future endeavors.
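To close, the public- and private-key walkthrough above can be made concrete with a toy textbook-RSA sketch. The tiny primes make it completely insecure; it only demonstrates that a message encrypted with the public key can be recovered only with the matching private key:

```python
# Toy textbook RSA with tiny primes -- for illustration only, never for
# real use. Real systems use vetted libraries and much larger keys.
p, q = 61, 53
n = p * q                     # modulus, part of both keys
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent (coprime with phi)
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

def encrypt(m: int) -> int:
    """Sender uses the recipient's public key (e, n)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Recipient uses the private key (d, n)."""
    return pow(c, d, n)

message = 65                  # a message encoded as a small integer
ciphertext = encrypt(message)
print(ciphertext == message)  # False: the ciphertext is unreadable
print(decrypt(ciphertext))    # 65: the original message is recovered
```

Note that anyone holding the public key can encrypt, but only the holder of the private exponent `d` can decrypt, which is exactly the asymmetry described in the lecture.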