ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 2 (Q21-40)

Visit here for our full ISC CISSP exam dumps and practice test questions.

Question 21

Which of the following access control models is primarily concerned with protecting information based on the classification level of the data and the clearance of users?

A. Discretionary Access Control (DAC)
B. Role-Based Access Control (RBAC)
C. Mandatory Access Control (MAC)
D. Attribute-Based Access Control (ABAC)

Answer: C. Mandatory Access Control (MAC)

Explanation: 

Mandatory Access Control (MAC) is one of the most stringent and secure models of access control in information security. Unlike Discretionary Access Control (DAC), where data owners can decide who accesses their resources, MAC operates under a system-enforced policy framework that strictly governs access. Users cannot override or modify permissions; all access decisions are determined by pre-defined rules and security labels applied to both users and data objects. These labels typically include classifications such as “Confidential,” “Secret,” or “Top Secret” for data, and corresponding clearance levels for users. The system automatically enforces these rules, ensuring that users can only access information if their clearance level meets or exceeds the classification of the data.

A major advantage of MAC is its effectiveness in mitigating insider threats. Because users cannot grant themselves additional privileges or alter access permissions, the risk of unauthorized access is significantly reduced. This makes MAC particularly suitable for environments where data confidentiality is critical, such as military operations, intelligence agencies, and certain government organizations. For instance, a military officer with “Secret” clearance cannot access documents labeled “Top Secret,” regardless of rank or role. Similarly, a government employee may be restricted from viewing certain classified reports if their clearance level is insufficient.

MAC also supports multi-level security (MLS) models and is often combined with formal security policies like the Bell-LaPadula model, which enforces “no read up, no write down” rules to prevent data leakage. Its rigid enforcement of access controls ensures that sensitive information is protected not only from external threats but also from accidental or intentional insider misuse. While MAC can be less flexible than DAC or Role-Based Access Control (RBAC), its strength lies in providing predictable, auditable, and system-enforced protection for highly sensitive data, making it a cornerstone of security in high-assurance environments.
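
As a sketch, the Bell-LaPadula read/write rules described above can be expressed in a few lines of Python. The labels and numeric hierarchy here are illustrative; real MLS systems enforce these checks in a kernel-level reference monitor, not application code:

```python
# Toy Bell-LaPadula check: "no read up, no write down" (labels are illustrative)
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    # Simple Security Property: a subject may read only at or below its clearance
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance: str, object_label: str) -> bool:
    # *-Property: a subject may write only at or above its clearance (no write down)
    return LEVELS[subject_clearance] <= LEVELS[object_label]

print(can_read("Secret", "Top Secret"))    # False: no read up
print(can_read("Secret", "Confidential"))  # True
print(can_write("Secret", "Confidential")) # False: no write down
```

Note that the officer example from the text falls out directly: a "Secret" clearance can never read "Top Secret" data, regardless of role.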

Question 22

In risk management, which step involves estimating the potential impact and likelihood of identified threats?

A. Risk avoidance
B. Risk assessment
C. Risk acceptance
D. Risk mitigation

Answer: B. Risk assessment

Explanation: 

Risk assessment is a fundamental component of the broader risk management process, forming the foundation for identifying, understanding, and mitigating risks within an organization. At its core, risk assessment involves the systematic identification of threats, vulnerabilities, and potential impacts that could negatively affect an organization’s operations, assets, or reputation. A threat represents any potential source of harm, whether natural, technical, or human-induced. Vulnerabilities are weaknesses in systems, processes, or practices that can be exploited by threats. Impact measures the potential consequences if a threat successfully exploits a vulnerability. By analyzing these three elements together, organizations can develop a comprehensive understanding of the risks they face and the magnitude of potential harm.

One of the primary objectives of risk assessment is to evaluate both the likelihood of each identified threat occurring and the severity of its potential consequences. This often involves a combination of qualitative and quantitative methods. Qualitative assessments categorize risks as high, medium, or low based on expert judgment, historical data, and scenario analysis, providing an accessible overview of the risk landscape. Quantitative approaches assign numerical probabilities and financial, operational, or reputational impact values, facilitating precise prioritization and cost-benefit analysis of mitigation strategies.

For example, a financial institution may assess the risk of a cybersecurity breach as highly probable and potentially catastrophic due to the exposure of sensitive customer data, regulatory penalties, and reputational damage. In contrast, the risk of a minor IT system outage might be less likely and have a lower operational impact. By combining both likelihood and impact, organizations can calculate risk scores, prioritize remediation efforts, and allocate resources efficiently.
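
A standard quantitative technique combines Single Loss Expectancy (SLE = asset value × exposure factor) with Annualized Loss Expectancy (ALE = SLE × annual rate of occurrence). The asset value and rates below are hypothetical figures for the breach scenario above:

```python
# Quantitative risk formulas: SLE = AV * EF, ALE = SLE * ARO (figures are hypothetical)
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, annual_rate_of_occurrence: float) -> float:
    return sle * annual_rate_of_occurrence

# Hypothetical customer-data breach: $2M asset, 30% exposed per incident,
# expected once every two years (ARO = 0.5)
sle = single_loss_expectancy(asset_value=2_000_000, exposure_factor=0.3)
ale = annualized_loss_expectancy(sle, annual_rate_of_occurrence=0.5)
print(f"SLE=${sle:,.0f}  ALE=${ale:,.0f}")  # SLE=$600,000  ALE=$300,000
```

An ALE figure like this lets the organization compare the annualized cost of the risk against the annual cost of a proposed control.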

Effective risk assessment also supports strategic decision-making by providing a structured basis for implementing controls, selecting risk treatment options, and monitoring residual risk. It ensures that organizations are not only reactive but also proactive in identifying emerging threats, addressing vulnerabilities, and maintaining resilience. Within the CISSP framework, mastering risk assessment is essential, as it underpins governance, compliance, business continuity, and security operations. A thorough and well-documented risk assessment process enables organizations to make informed decisions, justify security investments, and demonstrate due diligence to stakeholders and regulators.

Question 23

Which cryptographic technique ensures that a message has not been altered in transit and confirms the sender’s identity?

A. Symmetric encryption
B. Hashing
C. Digital signature
D. Key stretching

Answer: C. Digital signature

Explanation: 

Digital signatures leverage asymmetric cryptography to provide robust assurances of message integrity, authenticity, and non-repudiation. In this process, the sender uses their private key to generate a unique digital signature associated with a message or document. When the recipient receives the message, they use the sender’s corresponding public key to verify the signature. If the message has been altered in any way during transmission, the verification fails, immediately signaling tampering. This mechanism ensures that the message has not been modified in transit and confirms that it originated from the claimed sender, providing a reliable and cryptographically strong method of authentication.

Beyond integrity and authenticity, digital signatures provide non-repudiation, meaning the sender cannot later deny having sent the message. This is critical in contexts requiring accountability, legal enforceability, or regulatory compliance, where indisputable proof of message origin and integrity is necessary. Non-repudiation also strengthens audit trails, supports forensic investigations, and reinforces organizational trust in digital interactions.

Digital signatures are widely applied across a variety of domains. In secure communications, they protect emails, messaging systems, and network protocols from tampering or impersonation. In software distribution, they ensure that applications, patches, and updates have not been maliciously altered, helping prevent malware infections and ensuring user trust. In financial transactions and electronic contracts, digital signatures guarantee that instructions, payments, and agreements are authentic, verifiable, and legally binding. They are also integral to legal documentation, digital notarization, e-government services, and compliance frameworks, enabling organizations to meet regulatory requirements and maintain operational integrity.

Moreover, digital signatures are often combined with hashing algorithms and Public Key Infrastructure (PKI) systems to enhance security and scalability. Hashing ensures that even minor changes in the message content produce a distinct signature, while PKI provides mechanisms for key issuance, revocation, and verification. Within the CISSP framework, understanding digital signatures is essential not only for cryptographic implementations but also for broader concepts such as data integrity, authentication, non-repudiation, and secure communications. Proper use of digital signatures strengthens trust, accountability, and resilience across digital systems, forming a cornerstone of modern cybersecurity practices and governance.
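
The sign-with-private-key / verify-with-public-key flow can be illustrated with textbook-sized RSA numbers. This is strictly a toy: the modulus is tiny and there is no padding scheme. Real signatures use vetted libraries and schemes such as RSA-PSS or Ed25519:

```python
import hashlib

# Textbook RSA key pair (toy sizes: p=61, q=53 -> n=3233, e=17, d=2753)
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # Hash the message, then apply the PRIVATE key (toy: digest reduced mod n)
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recompute the digest and check it against the signature via the PUBLIC key
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"transfer $100 to Bob"
sig = sign(msg)
print(verify(msg, sig))               # True: intact and from the key holder
print(verify(msg, (sig + 1) % n))     # False: a corrupted/forged signature is rejected
```

A modified message fails verification the same way, because its digest no longer matches the value recovered from the signature; this is the integrity property the explanation describes.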

Question 24

What type of malware is designed to appear as legitimate software to trick users into installing it?

A. Worm
B. Trojan horse
C. Ransomware
D. Rootkit

Answer: B. Trojan horse

Explanation: 

Trojan horses are malicious programs designed to masquerade as legitimate or useful software, tricking users into executing them. Once installed, Trojans can perform a variety of harmful actions, such as stealing sensitive data, creating backdoors for remote access, and giving attackers control over the compromised system. Unlike worms, Trojans do not self-replicate; they rely entirely on user interaction, including downloading infected files, opening malicious email attachments, or clicking deceptive links.

Trojans are often used as delivery mechanisms for additional malware, participate in botnets, deploy ransomware, or modify system settings to weaken defenses. Social engineering plays a central role in their success, with attackers exploiting user trust through fake software updates, enticing downloads, phishing messages, or deceptive prompts to persuade victims to install the malware.

Preventing Trojan infections requires a combination of technical controls and human-focused strategies. Robust endpoint protection, firewalls, intrusion prevention systems, and timely patching help reduce vulnerabilities, while careful software installation practices, strict application whitelisting, and network segmentation further limit exposure. Because Trojans exploit human behavior, user education, security awareness training, and simulated phishing exercises are equally critical in reducing susceptibility.

Regular system monitoring, logging, and anomaly detection complement these preventative measures by identifying potential Trojan activity early, allowing rapid response and mitigation. By addressing both technical vulnerabilities and human factors, organizations can significantly reduce the likelihood and impact of Trojan-based attacks, safeguarding sensitive information, maintaining system integrity, and supporting overall cybersecurity resilience.

Question 25

In the context of business continuity, which plan focuses on restoring IT systems and data after a disaster?

A. Business Impact Analysis (BIA)
B. Disaster Recovery Plan (DRP)
C. Incident Response Plan (IRP)
D. Continuity of Operations Plan (COOP)

Answer: B. Disaster Recovery Plan (DRP)

Explanation: 

A Disaster Recovery Plan (DRP) provides structured procedures to restore IT systems, applications, and data following disruptive events such as natural disasters, cyberattacks, hardware failures, or human errors. It details backup strategies, recovery priorities, hardware and software replacement instructions, and step-by-step procedures designed to minimize system downtime and data loss. DRPs are critical for ensuring that technology-dependent business functions can resume quickly, preserving operational continuity while mitigating financial, legal, and reputational risks.

While a Continuity of Operations Plan (COOP) focuses broadly on maintaining overall organizational operations during emergencies, the DRP specifically targets IT recovery and infrastructure resilience. Effective DRPs typically incorporate offsite backups, cloud-based recovery solutions, failover systems, and regular testing to verify their effectiveness and readiness.

Implementing a comprehensive DRP enables organizations to respond to crises in a coordinated and controlled manner, maintain stakeholder confidence, and meet regulatory requirements related to data protection and operational continuity. By integrating the DRP with broader business continuity and risk management strategies, organizations can identify critical systems, define recovery time objectives (RTOs) and recovery point objectives (RPOs), and continuously improve preparedness for future incidents. A well-designed DRP not only restores IT functionality but also strengthens long-term organizational resilience, ensuring that the enterprise can withstand and recover from unforeseen disruptions while maintaining trust and operational stability.

Question 26

Which of the following best describes the principle of least privilege?

A. Users have full administrative rights.
B. Users only have the minimum access necessary to perform their jobs.
C. Users are denied access unless explicitly authorized.
D. Users can escalate privileges temporarily.

Answer: B. Users only have the minimum access necessary to perform their jobs.

Explanation: 

The principle of least privilege restricts user access to only the resources, permissions, and functions necessary to perform their specific job responsibilities, ensuring that no individual, process, or application has more authority than required. By limiting privileges to the minimum necessary, this approach reduces the risk of both accidental and deliberate misuse of sensitive information, system resources, or critical applications. It also mitigates the potential impact of compromised accounts, insider threats, or malware exploiting excessive permissions, thereby lowering the organization’s overall exposure to security incidents and breaches.

Least privilege is a foundational concept in information security and is implemented across multiple domains, including access control, identity and access management (IAM), system administration, application configuration, and network management. By enforcing strict boundaries, organizations ensure that users, applications, and processes operate only within their assigned scope, preventing unauthorized access, reducing the attack surface, and limiting the risk of privilege escalation attacks.

Practical implementation of least privilege involves careful role definition, role-based access control (RBAC), periodic access reviews, auditing, and continuous monitoring to ensure that permissions remain aligned with current job functions. It also requires addressing challenges such as permission creep, where accumulated or outdated privileges inadvertently create vulnerabilities over time, particularly in complex enterprise environments. Automation tools, identity governance solutions, and policy enforcement mechanisms can help maintain consistency and reduce human error in managing privileges.
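
A minimal RBAC permission check, with hypothetical roles and permissions, might look like the following; note the deny-by-default stance, which is central to least privilege:

```python
# Minimal RBAC sketch (roles, permissions, and users are hypothetical)
ROLE_PERMISSIONS = {
    "analyst": {"report:read"},
    "manager": {"report:read", "report:approve"},
}
USER_ROLES = {"alice": {"analyst"}, "bob": {"manager"}}

def is_authorized(user: str, permission: str) -> bool:
    # Deny by default: grant access only if some assigned role carries the permission
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "report:read"))     # True: within her role
print(is_authorized("alice", "report:approve"))  # False: least privilege denies it
print(is_authorized("mallory", "report:read"))   # False: unknown user, no roles
```

Periodic access reviews then amount to re-checking that each entry in USER_ROLES still matches the person's current job function, which is how permission creep is caught.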

Within the CISSP framework, understanding and applying least privilege is essential not only for technical controls but also for governance, risk management, and compliance purposes. It supports secure system design, regulatory adherence, and overall organizational resilience by ensuring accountability, enforcing separation of duties, and minimizing exposure to risk across all layers of the IT infrastructure. Properly implemented least privilege forms a cornerstone of a robust security posture, promoting both operational efficiency and long-term protection of sensitive assets and information.

Question 27

During a penetration test, the tester is given limited information about the target network. Which type of test is this?

A. Black box
B. White box
C. Gray box
D. Red team

Answer: A. Black box

Explanation: 

Black box penetration testing simulates an external attacker who has no prior knowledge of an organization’s internal network, systems, or architecture. Testers start with zero insider information and must conduct the entire attack lifecycle themselves: reconnaissance (collecting open-source intelligence and building an external footprint), scanning (discovering open ports and exposed services), enumeration (identifying targets, service versions, and misconfigurations), exploitation (attempting to gain access where possible), and post-exploitation (privilege escalation, lateral movement, persistence, and data exposure analysis). Because testers operate under the same uncertainty an outside adversary faces, black box tests provide a realistic assessment of perimeter defenses, external service hardening, and the organization’s ability to detect and respond to real attacks.

Compared with other testing approaches, black box emphasizes discovery skills and the ability to chain together lower-visibility findings into a credible attack path. White box testing (full knowledge) and gray box testing (partial knowledge) reduce the discovery phase by supplying internal details; they tend to be more efficient for finding deep or logic flaws and for validating secure-by-design controls. Black box, by contrast, is particularly valuable for evaluating how well external controls — firewalls, VPNs, web application firewalls, edge routers, DNS, and public-facing APIs — stand up to an attacker who must first find an entry point. It also tests real-world detection: can security monitoring spot the reconnaissance and follow-on actions?

Typical techniques and tool categories used in black box engagements include passive and active reconnaissance sources, network and web scanners, vulnerability enumeration tools, legitimate exploit frameworks for validation, and custom scripts. A skilled tester relies heavily on manual analysis and creative thinking to identify logic flaws, misconfigurations, and multi-step attack chains that automated tools frequently miss. Importantly, black box testing focuses on risk discovery and demonstration rather than destructive exploitation: responsible engagements use rules of engagement, safe testing windows, and escalation controls to avoid service disruption.
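
The scanning phase can be illustrated with a minimal TCP connect scan built from the standard library. The demo deliberately targets a listener it creates itself, since scanning systems without written authorization is illegal in most jurisdictions:

```python
import socket

def tcp_connect_scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a full TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demo against a listener we control — never scan hosts without authorization
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # OS assigns a free ephemeral port
listener.listen(1)
port = listener.getsockname()[1]
found = tcp_connect_scan("127.0.0.1", [port])
listener.close()
print(found == [port])  # True
```

Real engagements use far stealthier techniques (SYN scans, timing randomization) and mature tooling, but the core idea — probing for services that answer — is the same.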

The main benefits of black box testing are realism, a view of the external attack surface as a real adversary would see it, and the ability to validate detection and response processes. Its limitations include longer time-to-find due to lack of privileged information, potential blind spots for internal issues that require credentials or architecture diagrams to reveal, and higher cost for equivalent depth compared with white/gray box approaches. For comprehensive assurance, many organizations combine black, gray, and white box testing across application, network, and cloud environments.

Question 28

Which of the following attacks attempts to overwhelm a system’s resources, making services unavailable?

A. SQL Injection
B. Denial of Service (DoS)
C. Man-in-the-Middle (MITM)
D. Phishing

Answer: B. Denial of Service (DoS)

Explanation: 

Denial of Service (DoS) attacks are designed to disrupt the availability of systems, networks, or services by overwhelming them with excessive traffic, requests, or resource consumption. During such attacks, legitimate users are unable to access the targeted services, which can cause operational downtime, financial loss, and reputational damage. Distributed Denial of Service (DDoS) attacks are a more sophisticated variant, leveraging multiple compromised systems — often forming a botnet — to amplify the volume, scale, and complexity of the attack, making mitigation significantly more challenging.

DoS and DDoS attacks can target different layers of the network stack. Volumetric attacks aim to saturate bandwidth, preventing legitimate traffic from reaching the target. Protocol-based attacks, such as SYN floods or TCP connection table exhaustion, exploit weaknesses in communication protocols to consume server resources. Application-layer attacks, like HTTP floods, mimic legitimate user behavior to exhaust CPU, memory, or application resources, often bypassing traditional network defenses.

Organizations deploy a range of defenses to mitigate these threats. Firewalls, traffic filtering, and rate limiting can block or slow suspicious traffic. Additional measures include content delivery networks (CDNs) to distribute traffic load, DDoS scrubbing services to clean malicious traffic, anycast routing to distribute attack traffic across multiple locations, and upstream ISP filtering to absorb or redirect harmful traffic. Effective detection and response rely on continuous monitoring, anomaly detection, and predefined incident response playbooks that may include traffic blackholing, traffic shaping, or rapid escalation to mitigation providers.
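
Rate limiting, mentioned above, is commonly implemented as a token bucket: requests spend tokens, tokens refill at a fixed rate, and excess traffic is rejected. This sketch is illustrative; production systems usually enforce this at the load balancer or network edge rather than in application code:

```python
import time

class TokenBucket:
    """Allow short bursts up to `capacity`, sustained traffic up to `rate`/second."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1        # spend a token for this request
            return True
        return False                # bucket empty: throttle the request

bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(8)]
print(results)  # e.g. first five True (the burst), the rest throttled
```

Legitimate clients pacing their requests stay under the refill rate and are never throttled, while a flood exhausts the bucket almost immediately.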

Beyond technical controls, resilient architectures play a critical role. Capacity planning, network redundancy, failover systems, and geographically distributed infrastructure can reduce the impact of attacks. Regular tabletop exercises and simulations help organizations test incident response plans and ensure staff are prepared to act swiftly. DoS attacks are not only disruptive to operations and revenue but can also erode customer trust and may serve as a diversion for secondary attacks, such as data breaches or malware deployment. In the CISSP context, understanding the types, vectors, and mitigation strategies of DoS attacks is essential for designing resilient and secure systems that maintain availability under adverse conditions.

Question 29

Which security control type is designed to prevent an incident from occurring?

A. Detective
B. Corrective
C. Preventive
D. Compensating

Answer: C. Preventive

Explanation: 

Preventive controls are security measures designed to proactively stop incidents before they occur. They work by addressing vulnerabilities, enforcing policies, and limiting opportunities for unauthorized actions. Common examples include access controls, encryption, firewalls, intrusion prevention systems, authentication mechanisms, and formal security policies and procedures. By implementing preventive controls, organizations reduce the likelihood of attacks, data breaches, or misuse of systems and resources.

Unlike detective controls, which identify or alert on incidents after they occur, or corrective controls, which remediate issues post-event, preventive controls form the first line of defense. They help establish strong barriers to unauthorized access, prevent exploitation of known vulnerabilities, and enforce organizational standards for safe behavior and system configuration. Effective preventive controls not only protect information assets but also support regulatory compliance, maintain operational continuity, and minimize the financial, reputational, and operational impacts of potential security incidents.

In practice, preventive controls should be layered and integrated with detective and corrective measures to create a comprehensive security posture. This layered approach, often referred to as defense in depth, ensures that even if one control fails, additional safeguards remain in place to prevent or limit harm. In the CISSP framework, understanding and implementing preventive controls is fundamental to risk management and overall security governance.

Question 30

What is the main purpose of multi-factor authentication (MFA)?

A. To encrypt user credentials
B. To require multiple methods to verify identity
C. To ensure users change passwords frequently
D. To restrict access by IP address

Answer: B. To require multiple methods to verify identity

Explanation: 

Multi-factor authentication (MFA) strengthens identity verification by requiring a user to present two or more independent factors before access is granted. The classic factor categories are something you know (a password or PIN), something you have (a hardware token, smart card, or authenticator app on a smartphone), and something you are (a biometric such as a fingerprint or facial scan). Some frameworks add contextual attributes such as somewhere you are (location) or something you do (behavioral patterns). The defining requirement is that the factors come from different categories: requiring two passwords is merely multi-step authentication within a single factor and provides far weaker assurance.

The security value of MFA comes from layering. If one factor is compromised (for example, a password stolen through phishing, credential stuffing, or a data breach), the attacker still cannot authenticate without the remaining factors, which sharply reduces the success rate of common account-takeover attacks. Not all second factors are equally strong, however: SMS-delivered one-time codes are vulnerable to SIM swapping and interception, while time-based one-time passwords (TOTP), push notifications with number matching, and phishing-resistant hardware authenticators based on FIDO2/WebAuthn offer progressively stronger protection.

MFA is especially important for privileged accounts, remote access such as VPNs, and cloud administration consoles, and it is now a baseline expectation in many regulatory and contractual frameworks governing access to sensitive data. Organizations often pair it with risk-based or adaptive authentication, prompting for additional factors only when context, such as a new device, an unusual location, or a sensitive transaction, elevates risk.

Within the CISSP framework, MFA sits in the Identity and Access Management domain. Candidates should understand the factor categories, the relative strength of different implementations, the threats each mitigates, and how MFA complements related controls such as single sign-on, password policy, and account lockout to provide defense in depth for authentication.
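
A common possession factor behind Question 30's answer is the time-based one-time password (TOTP, RFC 6238), which derives a short-lived code from a shared secret and the current time. A minimal standard-library sketch follows; production systems should use a vetted library and constant-time comparison:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                       # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time: float = None, step: int = 30) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step))

secret = b"12345678901234567890"        # RFC 4226/6238 published test secret
print(totp(secret, for_time=59))        # "287082" (RFC 6238 SHA-1 test vector, 6 digits)
```

Because the server and the authenticator app share the secret and the clock, a stolen password alone is useless: the attacker also needs the current 30-second code.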

Question 31

Which of the following is considered a physical security control?

A. Biometric access system
B. Antivirus software
C. VPN encryption
D. Security awareness training

Answer: A. Biometric access system

Explanation: 

Physical security controls are measures designed to safeguard personnel, facilities, and critical organizational assets from unauthorized access, damage, theft, or sabotage. They encompass a wide range of mechanisms, including biometric scanners, access card systems, locks, fences, security guards, surveillance cameras, turnstiles, and barriers. These controls operate alongside technical (software- and network-based) and administrative (policy- and procedure-based) controls to form a comprehensive, multi-layered security posture.

The primary objectives of physical security are to deter potential intruders, detect unauthorized activity, delay or prevent access to sensitive areas, and protect assets from environmental hazards such as fire, flooding, earthquakes, or power failures. Effective physical security helps prevent theft of data or equipment, tampering with systems, and unauthorized interference with critical infrastructure. Without proper physical controls, attackers can bypass logical and technical defenses entirely, gaining direct access to servers, storage devices, workstations, or networking equipment.

In enterprise environments, physical security also supports operational continuity and regulatory compliance. For example, data centers, laboratories, and restricted-access offices often require layered controls that combine deterrence, detection, and response capabilities. Policies such as visitor logs, escort requirements, and secure disposal of sensitive materials further reinforce these measures.

Within the CISSP framework, understanding and implementing robust physical security is crucial. Organizations must assess risks, apply appropriate controls, and integrate physical security into broader security governance, risk management, and incident response strategies. By doing so, they not only protect people and tangible assets but also reinforce the integrity and availability of information systems, ensuring a holistic and resilient security posture.

Question 32

What is the primary goal of a vulnerability assessment?

A. Exploiting weaknesses in a system
B. Identifying and prioritizing system weaknesses
C. Creating intrusion detection rules
D. Responding to security incidents

Answer: B. Identifying and prioritizing system weaknesses

Explanation: 

Vulnerability assessments are systematic evaluations designed to identify, analyze, and prioritize weaknesses in networks, systems, applications, and configurations. Their primary goal is to uncover potential security gaps before they can be exploited by attackers, enabling organizations to proactively manage risks. These assessments typically involve automated scanning tools, configuration reviews, patch audits, and sometimes manual inspection to detect vulnerabilities such as outdated software, misconfigurations, weak passwords, or unpatched systems.

Unlike penetration testing, which actively exploits vulnerabilities to simulate real-world attacks, vulnerability assessments focus on detection and analysis without attempting to compromise systems. Once vulnerabilities are identified, they are ranked based on severity, likelihood of exploitation, and potential business impact. This risk-based prioritization helps organizations allocate resources efficiently to remediate the most critical issues first.
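
The risk-based prioritization described above can be sketched as a simple scoring pass. The formula (severity × likelihood × asset criticality) and the findings themselves are hypothetical; real programs typically start from CVSS scores plus exploitability and asset data:

```python
# Hypothetical risk-based ranking of scan findings
findings = [
    {"id": "VULN-1", "severity": 9.8, "likelihood": 0.9, "criticality": 1.0},  # internet-facing RCE
    {"id": "VULN-2", "severity": 5.3, "likelihood": 0.4, "criticality": 0.5},  # internal info leak
    {"id": "VULN-3", "severity": 7.5, "likelihood": 0.2, "criticality": 0.8},  # hard-to-reach service
]

for f in findings:
    # Composite risk score: CVSS-like severity weighted by likelihood and asset criticality
    f["risk"] = round(f["severity"] * f["likelihood"] * f["criticality"], 2)

ranked = sorted(findings, key=lambda f: f["risk"], reverse=True)
print([f["id"] for f in ranked])  # ['VULN-1', 'VULN-3', 'VULN-2']
```

Note how the ranking diverges from raw severity alone: the easily reachable critical-asset flaw outranks a higher-severity but hard-to-exploit one, which is exactly the point of risk-based prioritization.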

Conducting regular vulnerability assessments is a key component of proactive security management. It reduces the organization’s attack surface, strengthens defenses, and supports compliance with regulatory requirements. In addition, assessments provide actionable insights that inform patch management, system hardening, and security policy improvements. Within the CISSP framework, understanding the purpose, methodology, and distinction between vulnerability assessments and other testing activities, such as penetration testing or audits, is essential for maintaining a resilient and secure IT environment.

Question 33

Which security principle ensures that a user cannot deny actions they have performed?

A. Confidentiality
B. Integrity
C. Non-repudiation
D. Availability

Answer: C. Non-repudiation

Explanation: 

Non-repudiation is a security principle that ensures accountability by providing verifiable evidence that a specific user performed a particular action or transaction. This prevents individuals from denying their involvement, thereby supporting trust, transparency, and legal accountability. Common techniques for achieving non-repudiation include digital signatures, cryptographic authentication, secure logs, and detailed audit trails.

Digital signatures, for example, use asymmetric cryptography to link a user’s identity to a message or transaction, making it computationally infeasible to deny authorship. Secure logging and audit trails record actions in a tamper-evident manner, ensuring that all activities can be traced back to their originators. These mechanisms are essential for compliance with regulatory requirements, financial transactions, electronic contracts, and security investigations, where proof of action and integrity of records are critical.
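
Tamper-evident logging, one building block mentioned above, can be sketched as a hash chain in which each entry commits to the digest of its predecessor. A hash chain alone only proves tampering; full non-repudiation additionally requires each entry to be signed with the actor's private key:

```python
import hashlib
import json

def append_entry(log: list, actor: str, action: str) -> None:
    # Each entry commits to the previous entry's digest, forming a chain
    prev = log[-1]["digest"] if log else "0" * 64
    record = {"actor": actor, "action": action, "prev": prev}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append({**record, "digest": digest})

def verify_chain(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        record = {k: entry[k] for k in ("actor", "action", "prev")}
        expected = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["digest"] != expected:
            return False            # an edit anywhere breaks the chain from that point on
        prev = entry["digest"]
    return True

log = []
append_entry(log, "alice", "approved payment #42")
append_entry(log, "bob", "exported report")
print(verify_chain(log))                   # True
log[0]["action"] = "denied payment #42"    # tamper with history
print(verify_chain(log))                   # False: tampering is detectable
```

Rewriting one record invalidates every subsequent digest, so an insider cannot quietly edit history without regenerating the whole chain, which signed checkpoints would then expose.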

By enforcing non-repudiation, organizations can maintain trust in digital communications, ensure accurate and reliable record-keeping, and provide verifiable evidence during audits or disputes. In the CISSP context, understanding non-repudiation is fundamental for designing secure systems, implementing accountability measures, and protecting both information integrity and organizational liability.

Question 34

What type of backup involves copying all data since the last full backup?

A. Full backup
B. Incremental backup
C. Differential backup
D. Mirror backup

Answer: C. Differential backup

Explanation:

Differential backups are a backup strategy designed to capture all data that has changed since the last full backup. Unlike incremental backups, which only store changes made since the previous incremental backup, differential backups accumulate all modifications since the last full backup. As a result, differential backups tend to grow larger over time, consuming more storage than incremental backups, but they significantly simplify the restoration process. To fully recover data, only the last full backup and the most recent differential backup are required, eliminating the need to process a long chain of incremental backups. This reduction in complexity makes differential backups particularly valuable for organizations that prioritize recovery speed and reliability.
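
The restore-path difference can be sketched with toy backup sets (the file names and versions are hypothetical): a differential restore needs exactly two sets, while an incremental restore must replay the whole chain in order:

```python
# Toy model: each backup set is a dict of file -> content version
def restore_differential(full: dict, latest_differential: dict) -> dict:
    state = dict(full)
    state.update(latest_differential)   # exactly two backup sets needed
    return state

def restore_incremental(full: dict, incrementals: list) -> dict:
    state = dict(full)
    for inc in incrementals:            # the entire chain, applied in order
        state.update(inc)
    return state

full = {"a.txt": "v1", "b.txt": "v1"}
# Monday: a.txt edited; Tuesday: b.txt edited
diff_tue = {"a.txt": "v2", "b.txt": "v2"}            # all changes since the full backup
inc_mon, inc_tue = {"a.txt": "v2"}, {"b.txt": "v2"}  # changes since the previous backup

expected = {"a.txt": "v2", "b.txt": "v2"}
print(restore_differential(full, diff_tue) == expected)           # True
print(restore_incremental(full, [inc_mon, inc_tue]) == expected)  # True
```

Both paths reach the same state, but the differential path cannot be broken by a single missing or corrupt intermediate set, which is the recovery-reliability trade-off the explanation describes.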

Although they require more storage space than incremental backups, differential backups provide a practical balance between storage efficiency and recovery time. They ensure that all critical changes are captured and can be restored quickly in the event of hardware failure, software corruption, accidental deletion, or cyber incidents such as ransomware attacks. This capability helps organizations minimize operational downtime, maintain business continuity, and protect sensitive or mission-critical data.

Differential backups are also a key element of broader disaster recovery and business continuity planning. They complement full backups and can be scheduled strategically—daily, weekly, or at other intervals—to optimize both storage usage and recovery objectives. Within the CISSP framework, understanding the distinctions among full, incremental, and differential backups is essential for designing resilient backup strategies, defining recovery time objectives (RTOs) and recovery point objectives (RPOs), and ensuring organizations can reliably restore data and systems after disruptions. By incorporating differential backups effectively, organizations enhance their overall data protection posture, streamline recovery processes, and strengthen IT resilience against a wide range of operational and security risks.
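The restore-time trade-off described above can be made concrete by computing the set of backups a restore actually needs under each strategy. This is an illustrative sketch, assuming a simple chronological list of labeled backups:

```python
def restore_set(backups, strategy):
    """Return the backups needed for a full restore.

    `backups` is a chronological list of ("full"|"diff"|"incr", label) tuples.
    """
    # Restoration always starts from the most recent full backup.
    last_full = max(i for i, (kind, _) in enumerate(backups) if kind == "full")
    needed = [backups[last_full]]
    after = backups[last_full + 1:]
    if strategy == "differential":
        # Only the latest differential is needed: it holds ALL changes since the full.
        diffs = [b for b in after if b[0] == "diff"]
        if diffs:
            needed.append(diffs[-1])
    else:  # incremental
        # Every incremental since the full must be replayed, in order.
        needed.extend(b for b in after if b[0] == "incr")
    return needed

week = [("full", "Sun"), ("diff", "Mon"), ("diff", "Tue"), ("diff", "Wed")]
print(restore_set(week, "differential"))
# [('full', 'Sun'), ('diff', 'Wed')]

week_incr = [("full", "Sun"), ("incr", "Mon"), ("incr", "Tue"), ("incr", "Wed")]
print(restore_set(week_incr, "incremental"))
# [('full', 'Sun'), ('incr', 'Mon'), ('incr', 'Tue'), ('incr', 'Wed')]
```

The differential restore always touches exactly two backup sets, while the incremental chain grows with every backup since the last full, which is why differentials recover faster at the cost of larger individual backups.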

Question 35

Which of the following is a primary goal of information security governance?

A. Protecting all endpoints equally
B. Aligning security strategy with business objectives
C. Eliminating all risks
D. Monitoring employee internet usage

Answer: B. Aligning security strategy with business objectives

Explanation: 

Information security governance is the framework through which an organization directs, manages, and monitors its information security activities to ensure alignment with business objectives, priorities, and regulatory requirements. It establishes the policies, procedures, roles, and responsibilities that guide security initiatives, ensuring they support organizational goals while balancing risk, compliance, and resource allocation.

The primary purpose of information security governance is to integrate security into the organization’s overall strategy and business processes. This includes setting risk tolerance levels, defining security objectives, and ensuring that investments in security controls deliver measurable value and mitigate risks effectively. Governance emphasizes accountability, oversight, and structured decision-making, helping organizations prioritize security initiatives based on potential impact, regulatory requirements, and operational needs.

Effective governance does not attempt to eliminate all risks, as this is neither practical nor cost-effective. Instead, it ensures that risks are understood, assessed, and managed appropriately, with controls and processes designed to reduce exposure to acceptable levels. It also supports strategic decision-making by providing management with clear information about risk posture, compliance status, and security performance.

Within the CISSP framework, understanding information security governance is critical because it forms the foundation for risk management, policy development, compliance programs, and organizational resilience. Strong governance ensures that security efforts are not isolated or reactive but are consistently aligned with business priorities, enabling the organization to protect assets, maintain trust, and achieve its strategic objectives.

Question 36

In cryptography, which algorithm type uses the same key for both encryption and decryption?

A. Asymmetric
B. Symmetric
C. Hashing
D. Digital signature

Answer: B. Symmetric

Explanation: 

Symmetric encryption is a cryptographic technique in which a single shared key is used for both encrypting and decrypting data. Because the same key performs both operations, symmetric encryption is computationally efficient and particularly well-suited for handling large volumes of data. It is commonly used to secure files, databases, communications, and storage media. Popular symmetric algorithms include AES (Advanced Encryption Standard), DES (Data Encryption Standard), and 3DES (Triple DES).

A critical aspect of symmetric encryption is key management. Both parties must securely exchange and store the shared key, as compromise of the key would allow an attacker to decrypt all protected information. This requirement distinguishes symmetric encryption from asymmetric (public-key) encryption, where a public-private key pair enables secure key exchange, digital signatures, and authentication without the need to share secret keys.

In practical implementations, symmetric encryption is often combined with asymmetric encryption in hybrid systems. In such setups, asymmetric encryption is used to securely transmit a session key, which then enables fast, bulk encryption using a symmetric algorithm. This approach leverages the strengths of both methods: the efficiency of symmetric encryption for large data volumes and the secure key exchange capabilities of asymmetric encryption.

Within the CISSP domain, understanding the strengths, limitations, and appropriate applications of symmetric encryption is essential. It underpins core security concepts such as confidentiality, data protection, and secure communications in both enterprise and cloud environments. Mastery of symmetric encryption also involves knowing how to integrate it with key management practices, hybrid encryption systems, and broader cryptographic frameworks to ensure comprehensive information security.
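The defining property of symmetric encryption, that one shared key both encrypts and decrypts, can be shown with a deliberately simple XOR stream cipher. This is a toy for illustration only (real systems use vetted algorithms such as AES, and the hash-based keystream here is a made-up stand-in for a proper cipher):

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Expand the shared key into a pseudorandom byte stream (toy key expansion)."""
    out = b""
    for counter in count():
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """The same function encrypts and decrypts: XOR is its own inverse."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

shared_key = b"the one key both parties must protect"
ciphertext = xor_cipher(shared_key, b"payroll database export")
plaintext = xor_cipher(shared_key, ciphertext)  # identical key, identical call
print(plaintext)  # b'payroll database export'
```

Note how decryption is literally the same operation with the same key, which is exactly why key compromise exposes everything and why hybrid systems wrap this key in an asymmetric exchange.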

Question 37

Which term describes a network device that filters traffic based on predefined rules?

A. Router
B. Firewall
C. Switch
D. IDS

Answer: B. Firewall

Explanation: 

Firewalls are fundamental network security devices that monitor, filter, and control both incoming and outgoing network traffic according to a set of predefined security rules. Acting as a barrier between trusted internal networks and untrusted external networks—such as the internet—they help prevent unauthorized access, while allowing legitimate communications necessary for business operations. By regulating traffic flow, firewalls serve as a critical first line of defense in protecting an organization’s digital assets.

Firewalls operate at different layers of the network stack, each providing distinct functionality. Packet-filtering firewalls examine header information to allow or block traffic based on IP addresses, port numbers, and protocols. Stateful inspection firewalls go further by tracking the state of active sessions, ensuring that only valid packets associated with legitimate connections are permitted. Application-level firewalls, also known as proxy firewalls, inspect traffic at the application layer, enabling detailed control over specific services such as HTTP, SMTP, or FTP. This allows organizations to enforce security policies not just at the network level, but also at the service or application level, detecting potentially malicious payloads or protocol misuse.

By implementing firewalls, organizations enforce access policies, segment networks to contain potential threats, and prevent data exfiltration or unauthorized lateral movement within internal systems. Firewalls are most effective when integrated into a layered defense strategy, working in conjunction with intrusion detection and prevention systems (IDS/IPS), virtual private networks (VPNs), endpoint security solutions, and monitoring tools.

In the CISSP context, understanding the different types of firewalls, their capabilities, deployment strategies, and limitations is crucial. Effective firewall deployment involves proper placement in network topology, rule configuration, and ongoing management to balance security, performance, and operational requirements. Firewalls not only protect perimeter networks but also play a key role in enforcing segmentation, reducing the attack surface, and supporting an organization’s overall cybersecurity posture.

Question 38

Which disaster recovery strategy has the fastest recovery time but is also the most expensive?

A. Cold site
B. Warm site
C. Hot site
D. Mobile site

Answer: C. Hot site

Explanation:

A hot site is a fully equipped and operational backup facility that mirrors an organization’s primary site and can take over business operations almost immediately following a disaster. It typically includes servers, storage systems, applications, network connectivity, and up-to-date data backups, allowing continuity of critical operations with minimal downtime. Hot sites are designed to be ready-to-use, meaning that in the event of hardware failures, cyberattacks, natural disasters, or other disruptions, personnel can quickly switch operations to the hot site without significant interruption.

While hot sites provide the fastest recovery and highest availability, they also involve substantial costs, including hardware procurement, ongoing maintenance, software licensing, staffing, and regular testing to ensure readiness. Organizations with mission-critical operations—such as financial institutions, healthcare providers, or large-scale manufacturing—often invest in hot sites to maintain service continuity and meet stringent recovery time objectives (RTOs) and recovery point objectives (RPOs).

In the CISSP domain of business continuity and disaster recovery, understanding the differences between hot, warm, and cold sites is essential for planning recovery strategies that balance cost, downtime tolerance, and operational priorities. Properly managed hot sites allow organizations to respond rapidly to disruptions, minimize financial and reputational impact, and sustain critical business functions under adverse conditions.

Question 39

Which of the following best describes social engineering attacks?

A. Exploiting software vulnerabilities
B. Manipulating humans to reveal confidential information
C. Intercepting network traffic
D. Deploying malware through email attachments

Answer: B. Manipulating humans to reveal confidential information

Explanation: 

Social engineering attacks exploit human psychology, trust, and behavior to gain unauthorized access to systems, data, or physical locations. Rather than targeting technical vulnerabilities, these attacks manipulate individuals into performing actions or divulging information that circumvents security controls. Common social engineering techniques include phishing emails, pretexting (creating a fabricated scenario to extract information), baiting (offering something enticing to lure victims), tailgating (following someone into a restricted area), and impersonation of trusted personnel or entities.

Because social engineering relies on human error rather than system flaws, it can bypass technical defenses such as firewalls, intrusion detection systems, or antivirus software. Organizations counter these risks through comprehensive security awareness programs, user training, verification and authentication procedures, and strict security policies. Educating employees about common tactics, warning signs, and the potential consequences of falling victim is critical for reducing susceptibility.

Within the CISSP domain, understanding social engineering is essential for threat modeling, risk assessment, and implementing layered security controls. Effective mitigation combines technology, process, and people-focused strategies to create a security-conscious culture that limits the likelihood and impact of human-targeted attacks.

Question 40

In risk management, what is the best description of residual risk?

A. Risk eliminated after implementing controls
B. Risk that remains after controls are applied
C. The highest possible risk scenario
D. Risk that has been ignored

Answer: B. Risk that remains after controls are applied

Explanation: 

Residual risk is the level of risk that remains after an organization has implemented controls and mitigation measures to address identified threats and vulnerabilities. It reflects the portion of exposure that cannot be completely eliminated, acknowledging that no system, process, or security control can provide absolute protection. Residual risk encompasses uncertainties, limitations of implemented controls, human factors, and the ever-evolving threat landscape, all of which may still lead to potential loss, compromise, or operational disruption despite mitigation efforts.

Organizations assess residual risk to determine whether it falls within acceptable thresholds or whether additional risk treatment is required. This evaluation informs decisions on deploying extra safeguards, enhancing monitoring, transferring risk through mechanisms such as insurance, or formally accepting certain risks after weighing potential impact against cost and operational feasibility. Understanding residual risk allows organizations to allocate resources efficiently, prioritize remediation efforts, and develop risk response strategies that balance security, operational continuity, and business objectives.

Residual risk also plays a critical role in continuous risk management. It drives ongoing monitoring, incident response planning, and iterative improvements to security programs. Since risk environments constantly evolve due to technological changes, emerging threats, and shifting business priorities, residual risk assessment is a dynamic process rather than a one-time exercise. Within the CISSP framework, managing residual risk is central to effective security governance, helping organizations maintain resilience, meet compliance requirements, and make informed strategic decisions. By clearly identifying, quantifying, and addressing residual risk, organizations ensure that security measures are practical, effective, and aligned with business continuity goals, ultimately protecting assets, reputation, and stakeholder confidence.
