CompTIA SY0-701 Security+ Exam Dumps and Practice Test Questions Set 9 Q 161-180

Visit here for our full CompTIA SY0-701 exam dumps and practice test questions.

Question 161

Which of the following best describes a zero-day vulnerability?

A) A vulnerability that has been patched but not widely applied
B) A vulnerability unknown to the vendor and public that is actively exploited
C) A vulnerability used only in penetration testing
D) A vulnerability that exists in open-source software only

Answer: B

Explanation:

A zero-day vulnerability refers to a software flaw that is unknown to both the software vendor and the public, meaning there is no official patch or fix available at the time of discovery. These vulnerabilities are considered extremely dangerous because attackers can exploit them without any immediate countermeasure in place. Zero-day vulnerabilities are often used by malicious actors in targeted attacks, such as advanced persistent threats (APTs), cyber espionage, or financial crimes. These attacks are often highly sophisticated and carefully planned, precisely because the underlying flaw is unknown to the vendor and no patch yet exists to mitigate the risk.

The term “zero-day” comes from the fact that the vendor has had zero days to fix the flaw by the time it is discovered or actively exploited, before the vendor or the public is aware of its existence. Once a zero-day is discovered, the attacker and the vendor are in a race against time: the attacker seeks to exploit the vulnerability as quickly as possible, often for purposes such as unauthorized access, privilege escalation, or data exfiltration, while the vendor works to develop and release a patch to fix the vulnerability. The patch, however, may take time to develop, test, and distribute, leaving organizations vulnerable during this period.

Zero-day vulnerabilities are particularly dangerous because they are unknown and therefore unpatched, so attackers can take advantage of the vulnerability until a fix is issued. In contrast, patched vulnerabilities are vulnerabilities that have been identified, reported, and for which a patch has been developed and released by the vendor. Organizations can mitigate the risks associated with patched vulnerabilities by applying updates and security patches as soon as they are made available. In the case of zero-days, however, the absence of a patch means organizations must rely on more immediate, often complex, defensive strategies. These may include the use of intrusion detection systems (IDS), anomaly-based monitoring, and virtual patching, which acts as a temporary defense until an official patch is released.

While penetration testers may simulate zero-day conditions during security assessments to test the robustness of an organization’s defenses, zero-day vulnerabilities are not limited to such testing scenarios. In fact, zero-day exploits are often a major tool used by cybercriminals and nation-state actors in real-world attacks, where they can go undetected for extended periods. This is why zero-day vulnerabilities are a critical focus for cyber threat intelligence and incident response teams, who need to anticipate and react to them rapidly.

It is also important to note that open-source software is not immune to zero-day vulnerabilities. While open-source code is publicly available for inspection and modification, it can still contain zero-day flaws that are not immediately discovered or patched by the community. Therefore, zero-day vulnerabilities are not exclusive to any particular type of software—whether proprietary or open-source. All types of software, from commercial products to open-source projects, can be affected by zero-day vulnerabilities.

For security professionals, understanding zero-day vulnerabilities is vital because they represent a unique and significant risk to organizational security. Given the unpredictable nature of zero-day attacks, organizations must adopt proactive security measures, such as continuous monitoring for signs of abnormal behavior, leveraging threat intelligence to stay informed about emerging risks, and implementing layered security defenses. Layered defenses help minimize the potential for exploitation by combining different security technologies (e.g., firewalls, endpoint protection, and application security) and strategies (e.g., employee training, patch management policies, and incident response planning).

In conclusion, zero-day vulnerabilities highlight the need for constant vigilance and preparedness in cybersecurity. Since zero-days are typically exploited before a patch is available, security teams must deploy advanced defensive strategies to detect and mitigate these vulnerabilities before they can cause significant damage. Zero-day awareness also reinforces the importance of a security-first mindset that emphasizes proactive monitoring, rapid response, and adaptive security controls to stay one step ahead of attackers.

Question 162

Which of the following is the primary purpose of a security information and event management (SIEM) system?

A) To encrypt all incoming and outgoing traffic
B) To collect, analyze, and correlate security events from multiple sources
C) To perform penetration testing automatically
D) To block unauthorized access to network resources

Answer: B

Explanation:
A Security Information and Event Management (SIEM) system is a crucial component of modern cybersecurity frameworks, providing real-time analysis and management of security events within an IT infrastructure. The primary purpose of a SIEM system is to collect, aggregate, and analyze log and event data from a wide variety of sources, including firewalls, intrusion detection/prevention systems (IDS/IPS), servers, applications, endpoints, and network devices. This data can come from different types of security tools and IT systems, each generating log files that contain valuable information about the security state of the network.

Once the data is collected, the SIEM system uses advanced data correlation and analysis algorithms to identify patterns and anomalies that might indicate potential security incidents or breaches. This event correlation is key because it enables the SIEM to identify complex attacks that may span multiple devices or systems and go unnoticed if logs were examined individually. For example, an attacker might compromise an endpoint, escalate privileges, and later move laterally within the network—activities that, when isolated, could seem benign but, when correlated, indicate a coordinated attack.
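
As a rough illustration of this kind of correlation logic, the sketch below (in Python, purely for illustration) flags a host where a burst of failed logins is followed shortly by a privilege-escalation event. The event format, field names, and thresholds are assumptions made for the example, not any particular SIEM's schema or rule language.

```python
from datetime import datetime, timedelta

# Illustrative, simplified events; real SIEMs normalize logs from many sources.
events = [
    {"time": datetime(2024, 1, 1, 2, 0), "host": "srv01", "type": "failed_login"},
    {"time": datetime(2024, 1, 1, 2, 1), "host": "srv01", "type": "failed_login"},
    {"time": datetime(2024, 1, 1, 2, 2), "host": "srv01", "type": "failed_login"},
    {"time": datetime(2024, 1, 1, 2, 5), "host": "srv01", "type": "privilege_escalation"},
]

def correlate(events, window=timedelta(minutes=10), threshold=3):
    """Flag hosts where >= threshold failed logins precede a privilege escalation."""
    alerts = []
    for e in events:
        if e["type"] != "privilege_escalation":
            continue
        failures = [
            f for f in events
            if f["host"] == e["host"]
            and f["type"] == "failed_login"
            and timedelta(0) <= e["time"] - f["time"] <= window
        ]
        if len(failures) >= threshold:
            alerts.append((e["host"], e["time"], len(failures)))
    return alerts

print(correlate(events))  # one alert for srv01: three failed logins, then escalation
```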

SIEM systems help organizations detect a wide variety of threats, from malware infections and unauthorized access attempts to advanced persistent threats (APTs), insider threats, and other sophisticated cyberattacks. The system can identify suspicious patterns such as a user logging in at unusual hours or accessing sensitive data they do not typically interact with, allowing security teams to respond quickly before damage is done. Additionally, SIEM systems provide a centralized point of visibility, enabling security teams to monitor and investigate security events across the entire infrastructure in real-time.

However, it’s important to clarify what a SIEM is and what it is not. While it plays a critical role in cybersecurity, a SIEM does not itself encrypt or otherwise secure network traffic. Encryption tools (like VPNs or TLS) are designed to protect data as it travels over the network, ensuring confidentiality and integrity. In contrast, SIEM focuses on detecting, alerting, and providing insights based on log data and events, not securing the data itself. So Option A (encrypting all incoming and outgoing traffic) is incorrect.

Similarly, while a SIEM system can provide valuable data and insights that inform penetration testing efforts, it does not actually conduct penetration testing itself. Penetration testing typically involves simulating real-world attacks to identify vulnerabilities in a system before attackers can exploit them. SIEM, on the other hand, helps identify patterns of behavior that might indicate an attack or breach, and it helps with the investigation and response after an event occurs. Thus, Option C (SIEM performing penetration testing) is incorrect.

Another common misconception is that SIEM systems can block attacks or prevent unauthorized access in real time. While a SIEM can alert administrators to suspicious activities such as brute force attacks, data exfiltration attempts, or unauthorized login attempts, it does not inherently have the ability to block these activities directly. For proactive defense, SIEM systems are often integrated with other security solutions like firewalls, intrusion prevention systems (IPS), or endpoint protection software, which can automatically block or mitigate threats. However, SIEM itself is primarily a tool for detection, not prevention, which makes Option D (SIEM blocking unauthorized access) incorrect.

For SIEM to be effective, it requires careful configuration, continuous tuning, and integration with incident response processes. False positives (alerts that indicate problems where there are none) can be a significant issue in SIEM systems, especially in complex environments with large volumes of data. Therefore, fine-tuning the rules and correlation algorithms to reflect the organization’s specific security needs and network behavior is essential. Additionally, SIEM should be integrated with incident response workflows so that alerts can be acted upon quickly, ideally within a security operations center (SOC).

A key benefit of SIEM is its ability to support compliance with various regulatory requirements such as GDPR, HIPAA, PCI-DSS, and SOC 2. By providing centralized logging, reporting, and audit trails, SIEM helps organizations track and document security events, demonstrating that they are meeting compliance standards and are proactively managing security risks. This is particularly important for industries where auditability and transparency are required by law or industry standards.

In summary, a SIEM system is an essential tool for any modern cybersecurity strategy. By collecting, analyzing, and correlating data from multiple sources, it helps security teams detect complex attacks, improve their response time, and meet regulatory requirements. However, it is not a silver bullet for all security needs—it requires proper configuration, integration with other security tools, and ongoing tuning to ensure that alerts are meaningful and actionable. Understanding what SIEM is capable of, and its limitations, is key for security professionals to make the most out of the system and strengthen the organization’s overall security posture.

Question 163

Which type of malware is specifically designed to replicate itself and spread across systems without user intervention?

A) Trojan
B) Worm
C) Spyware
D) Rootkit

Answer: B

Explanation:

A worm is a type of self-replicating malware that spreads autonomously across networks, exploiting vulnerabilities in operating systems, applications, or network protocols. Unlike other types of malware, such as Trojans, which rely on user interaction or social engineering to gain access to a system, worms do not require any direct user involvement. Instead, worms self-propagate by identifying and exploiting security weaknesses in the target systems, often moving quickly from one machine to another. This ability to spread independently makes worms particularly dangerous in large-scale network environments, where they can infect numerous systems within a short period of time, causing widespread disruption, data loss, and potentially financial damage.

Worms can propagate through several methods, including email attachments, network file sharing, or even unpatched vulnerabilities in software or operating systems. Once a worm successfully infects a machine, it typically seeks out other vulnerable machines within the network or over the internet to continue its spread. This autonomous behavior allows worms to infect large numbers of devices without the need for human intervention, making them capable of overwhelming network resources and causing significant damage.

One of the distinguishing features of worms is their ability to replicate and spread without relying on human actions. This is in contrast to other types of malware, such as Trojans. A Trojan is a type of malware that often masquerades as a legitimate program or file, tricking the user into installing it. It relies heavily on social engineering, such as email phishing or malicious downloads, to gain access to a system. Trojans do not have the self-replicating capabilities that worms possess, which makes Option A (Trojan) incorrect. Option C (Spyware) is also incorrect: while spyware often collects data from infected systems covertly, it likewise lacks the self-replicating characteristic of a worm.

Rootkits, which are another type of malware, are designed to maintain privileged access to a system and often work to conceal malicious activities. While rootkits can be extremely dangerous because they allow attackers to maintain control over infected systems, they do not inherently spread by themselves in the same way worms do. Rootkits may be delivered by worms or other types of malware, but they are not self-replicating. Their primary purpose is to hide the existence of other malware or unauthorized access, rather than propagating across networks autonomously. This makes Option D (Rootkits) incorrect in the context of self-replicating malware.

Worms are not just standalone threats; they are often used as delivery mechanisms for other types of malicious payloads. For example, a worm may deliver ransomware, which encrypts a victim’s files and demands payment for decryption, or it might deploy keyloggers, which capture keystrokes and send them to the attacker. The worm itself, in these cases, is the vehicle that spreads and installs the more malicious components, amplifying its overall impact. The ability of worms to serve as vectors for other types of malware is one of the reasons why they are so dangerous.

To prevent worm infections, organizations must take a multi-layered approach to security. This includes patch management to fix known vulnerabilities, network segmentation to limit the spread of infections within the network, and the deployment of intrusion detection systems (IDS) to detect suspicious activity. Endpoint protection, such as antivirus software and firewalls, is also critical in detecting and blocking worm activity before it can spread. Additionally, organizations should regularly review and update their security protocols to ensure they are prepared for new and evolving worm threats.

For incident responders, understanding worm behavior is essential for quickly identifying infection patterns and developing containment strategies. As worms spread across networks, they may exhibit certain telltale signs, such as high network traffic or the rapid appearance of new infections across a range of systems. Early detection and rapid response are key to minimizing the damage caused by a worm outbreak. Isolating affected systems, applying security patches, and identifying and removing the worm from the network can help stop its spread and restore normal operations.

In conclusion, understanding the distinction between self-replicating worms and other types of malware is crucial for building comprehensive malware defense strategies and planning for network security. While worms can spread autonomously and cause significant damage, their propagation can be slowed or stopped with effective security measures like patching, segmentation, and monitoring. Being proactive in identifying and addressing potential vulnerabilities is the best defense against the widespread and rapid threats posed by worms.

Question 164

Which access control model restricts user permissions based on predefined roles within an organization? 

A) Discretionary Access Control (DAC)
B) Mandatory Access Control (MAC)
C) Role-Based Access Control (RBAC)
D) Attribute-Based Access Control (ABAC)

Answer: C

Explanation:

Role-Based Access Control (RBAC) is a widely used access control model that assigns permissions to users based on their roles within an organization. In RBAC, instead of assigning permissions to individual users, access rights are tied to specific roles, and users are assigned to these roles. This model simplifies the management of user permissions by grouping users according to their job functions or responsibilities. For example, in a typical enterprise setting, roles such as “Manager,” “HR Staff,” “IT Administrator,” or “Employee” might have different access levels to systems, data, and resources based on their respective duties within the organization.

RBAC ensures consistency in permission assignments and reduces administrative errors. Since users inherit permissions through their role, the need for manually setting individual permissions for each user is minimized. This is especially beneficial in larger organizations where managing permissions for hundreds or thousands of employees can become error-prone and time-consuming. The centralized control of permissions through roles ensures that users only have the access necessary for their tasks, helping to enforce the principle of least privilege—a key security best practice.
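
A minimal sketch of the idea in Python, with hypothetical roles, users, and permissions chosen purely for illustration: permissions attach to roles, and an access check resolves the user's role first.

```python
# Hypothetical role-to-permission mapping; real systems store this in a directory
# or IAM service rather than in application code.
ROLE_PERMISSIONS = {
    "hr_staff": {"read_employee_records", "update_employee_records"},
    "it_admin": {"reset_passwords", "manage_servers"},
    "employee": {"read_own_profile"},
}

USER_ROLES = {"alice": "hr_staff", "bob": "employee"}

def is_authorized(user: str, permission: str) -> bool:
    """Grant access only if the user's role carries the requested permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("alice", "update_employee_records"))  # True
print(is_authorized("bob", "reset_passwords"))            # False
```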

In contrast to Discretionary Access Control (DAC), where the owner of a resource has control over who can access it (making access decisions at the individual level), RBAC is organizationally driven rather than owner-driven. In DAC, resource owners have the discretion to set permissions for others, often resulting in less consistency and potential for human error. Therefore, Option A (Discretionary Access Control) is incorrect because RBAC assigns permissions based on roles, not individual ownership or discretion.

Mandatory Access Control (MAC), on the other hand, is a more rigid model where access to resources is governed by system-enforced policies rather than user-defined roles. These policies are set by system administrators and often include security classifications (e.g., “Top Secret,” “Confidential,” “Public”) that restrict access based on classification levels. MAC is commonly used in high-security environments where it is essential to strictly control access based on predefined rules. Since RBAC is focused on roles and does not typically involve the use of system-wide classification levels, Option B (Mandatory Access Control) is incorrect.

Another model is Attribute-Based Access Control (ABAC), which evaluates access decisions based on a combination of user attributes, resource attributes, and environmental conditions. For instance, ABAC might take into account the user’s department, the time of day, or the location of the access request, granting permissions dynamically based on these conditions. Unlike RBAC, which is centered on static roles, ABAC is more granular and flexible in its access control decisions. This makes Option D (Attribute-Based Access Control) incorrect, as it focuses on attributes rather than roles.

RBAC is particularly effective in enterprises with well-defined roles and hierarchical structures, where users’ duties and responsibilities are clearly defined. In such environments, it’s much easier to manage access rights by grouping users into roles that reflect their job functions. This reduces complexity and ensures that users receive the appropriate level of access for their work. Moreover, RBAC helps simplify compliance with regulations such as HIPAA, GDPR, and SOX because it provides a clear framework for controlling and auditing access. Organizations can easily track who has access to what resources and justify their access policies during audits.

Additionally, RBAC helps with auditing and monitoring access control. By assigning roles with predefined permissions, it becomes easier to track access patterns and detect potential security risks. In large organizations, manually managing individual user permissions can be extremely cumbersome and prone to error. RBAC streamlines this process by allowing system administrators to focus on maintaining and updating roles rather than managing individual user permissions, making it much more scalable and efficient.

Implementing RBAC effectively requires a well-thought-out role hierarchy and a clear understanding of organizational needs. It’s important to carefully define roles, as poorly defined roles could either grant excessive access or inadvertently restrict legitimate access. Security professionals must ensure that roles are mapped to appropriate access levels, taking into account business requirements and security policies. It is also important to regularly review and update roles to ensure they remain relevant as organizational structures and responsibilities evolve.

By understanding the capabilities and limitations of RBAC, security professionals can design access policies that strike the right balance between security and usability. This helps ensure that users have the right access to perform their tasks without unnecessarily exposing sensitive data or systems. Furthermore, implementing RBAC helps minimize the risk of privilege escalation—a security threat where an attacker or insider gains unauthorized access to more privileges than they should have. By tightly controlling roles and access, RBAC provides an effective means to prevent insider threats and reduce the overall attack surface.

In summary, Role-Based Access Control (RBAC) is a fundamental access control model that provides scalable, consistent, and manageable security by assigning permissions based on user roles. It is particularly effective in enterprises with well-defined roles and organizational structures. By understanding how RBAC works, organizations can enforce the principle of least privilege, ensure compliance with regulations, and improve their overall cybersecurity posture.

Question 165

What is the primary difference between symmetric and asymmetric encryption?

A) Symmetric uses one key, asymmetric uses two keys
B) Symmetric is slower than asymmetric
C) Symmetric can only encrypt text, asymmetric encrypts files
D) Symmetric uses hashing, asymmetric does not

Answer: A

Explanation:

The primary distinction between symmetric and asymmetric encryption lies in the number and type of keys used. Symmetric encryption uses a single secret key for both encryption and decryption, requiring that both sender and receiver securely share the key. It is generally faster and suitable for encrypting large volumes of data, but key distribution can be challenging. Asymmetric encryption uses a pair of keys: a public key for encryption and a private key for decryption. This eliminates the need to share a secret key and allows secure communication over unsecured channels. Asymmetric encryption is often slower due to computational complexity but enables key exchange, digital signatures, and authentication. Option B is incorrect because symmetric is typically faster than asymmetric. Option C is incorrect because symmetric encryption can secure any digital data, not just text. Option D is incorrect as hashing is a separate cryptographic function unrelated to the distinction between symmetric and asymmetric encryption. Understanding these differences is critical for designing secure systems, as hybrid approaches combining symmetric and asymmetric encryption often leverage the speed of symmetric encryption and the secure key distribution of asymmetric encryption, ensuring both efficiency and robust security.
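
As a hedged illustration of the hybrid approach mentioned above, the Python sketch below uses the third-party cryptography package (an assumption; any comparable library would work): a fast symmetric key encrypts the bulk data, and the recipient's RSA public key protects only that small symmetric key.

```python
# pip install cryptography  (third-party library; assumed available)
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's asymmetric key pair (the private key never leaves the recipient).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Symmetric key encrypts the bulk data quickly.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"large payload goes here")

# Asymmetric RSA-OAEP protects only the small symmetric key in transit.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(sym_key, oaep)

# Recipient unwraps the symmetric key, then decrypts the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"large payload goes here"
```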

Question 166

Which of the following is a primary purpose of a firewall in a network security architecture?

A) To encrypt all internal communications automatically
B) To filter traffic based on predefined rules and policies
C) To detect and remove malware on endpoints
D) To create VPN tunnels for remote users

Answer: B

Explanation:

A firewall is a fundamental component in network security, designed primarily to filter incoming and outgoing traffic according to a set of predefined rules and policies. Firewalls can be hardware-based, software-based, or a combination of both, and they serve as a critical barrier between trusted internal networks and untrusted external networks, such as the internet. By enforcing traffic rules, firewalls prevent unauthorized access, block malicious traffic, and reduce the risk of network intrusion. Firewalls can operate at different layers of the OSI model, including packet filtering at Layer 3, stateful inspection at Layer 4, and application-layer filtering at Layer 7. Option A is incorrect because encryption is not the main function of a firewall, although some advanced firewalls may support SSL/TLS inspection. Option C is inaccurate because malware detection is primarily handled by antivirus or endpoint detection systems, not by firewalls. Option D is partially correct in that some firewalls support VPNs, but creating VPN tunnels is not the firewall’s primary purpose; it is an auxiliary feature. Proper firewall configuration, including rule order, logging, and monitoring, is critical to ensure effective security. Firewalls are a first line of defense, forming part of a layered security approach that includes intrusion detection systems, endpoint security, and network segmentation. Understanding firewall types, deployment strategies, and traffic filtering capabilities helps security professionals design secure network architectures that mitigate a wide range of cyber threats.
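
To make "filtering traffic based on predefined rules" concrete, here is a toy first-match packet filter in Python. The rule format and fields are invented for illustration and are far simpler than real firewall policy languages.

```python
from ipaddress import ip_address, ip_network

# First-match rule list, evaluated top to bottom, with an implicit default deny.
RULES = [
    {"action": "allow", "src": "10.0.0.0/8",     "dst_port": 443, "proto": "tcp"},
    {"action": "deny",  "src": "0.0.0.0/0",      "dst_port": 23,  "proto": "tcp"},  # block Telnet
    {"action": "allow", "src": "192.168.1.0/24", "dst_port": 53,  "proto": "udp"},
]

def filter_packet(src_ip: str, dst_port: int, proto: str) -> str:
    """Return the action of the first matching rule, or 'deny' if none match."""
    for rule in RULES:
        if (ip_address(src_ip) in ip_network(rule["src"])
                and dst_port == rule["dst_port"]
                and proto == rule["proto"]):
            return rule["action"]
    return "deny"  # default deny when no rule matches

print(filter_packet("10.1.2.3", 443, "tcp"))   # allow
print(filter_packet("8.8.8.8", 23, "tcp"))     # deny
print(filter_packet("172.16.0.5", 80, "tcp"))  # deny (no rule matched)
```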

Question 167

Which of the following best describes phishing attacks?

A) Attacks that exploit software vulnerabilities automatically
B) Social engineering attacks that attempt to obtain sensitive information through deception
C) Malware that encrypts files and demands a ransom
D) Unauthorized network scans for open ports

Answer: B

Explanation:

Phishing is a social engineering technique used to deceive individuals into revealing sensitive information such as usernames, passwords, or financial data. Attackers often disguise themselves as legitimate entities through emails, messages, or websites that appear authentic. Phishing attacks exploit human psychology rather than technical vulnerabilities, making them highly effective across various industries and demographics. Option A is incorrect because phishing does not primarily exploit software flaws. Option C refers to ransomware, which is a different threat vector. Option D describes reconnaissance activities, which may be part of a larger attack but are not phishing. Phishing techniques can include spear-phishing, where highly targeted messages are sent to specific individuals, or whaling, which targets high-level executives. Modern phishing campaigns may leverage social media, SMS, or voice calls, making detection more complex. Security awareness training, email filtering, multi-factor authentication, and incident reporting procedures are critical defenses against phishing. Organizations must continuously educate employees about common phishing indicators, such as suspicious links, unsolicited attachments, and urgent requests for sensitive data. By recognizing the characteristics of phishing, cybersecurity professionals can develop preventive strategies and minimize the risk of data breaches and credential compromise.

Question 168

Which type of attack attempts to overwhelm a service by sending a high volume of traffic to exhaust resources?

A) Man-in-the-middle
B) Denial of Service (DoS)
C) Cross-site scripting
D) SQL injection

Answer: B

Explanation:

A Denial of Service (DoS) attack is designed to overload a system, application, or network resource, rendering it unavailable to legitimate users. Attackers achieve this by sending a massive volume of traffic, consuming bandwidth, CPU, memory, or other critical resources. A DoS attack can target a single host or an entire network segment and often results in service disruption, financial loss, or reputational damage. Option A is incorrect because man-in-the-middle attacks intercept communication rather than overwhelm resources. Option C, cross-site scripting, is a web-based attack that injects malicious scripts into webpages, not a traffic-based disruption. Option D, SQL injection, exploits database vulnerabilities to manipulate or retrieve data but does not inherently cause a denial of service. Distributed Denial of Service (DDoS) attacks amplify the impact by using multiple compromised systems to generate traffic simultaneously, making mitigation more challenging. Techniques such as rate limiting, traffic filtering, intrusion prevention systems, and cloud-based DDoS protection services help reduce the risk and impact of DoS attacks. Understanding DoS attack vectors is crucial for security teams to implement proactive monitoring, incident response planning, and resilient network designs that maintain availability under adverse conditions.
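
One of the mitigations mentioned above, rate limiting, can be sketched as a per-client token bucket. The numbers and in-memory state below are illustrative; in practice this is enforced at load balancers, CDNs, or dedicated DDoS-protection services rather than in application code.

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # client exceeded its budget

buckets = {}  # one bucket per client IP

def handle_request(client_ip: str) -> str:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate=5, capacity=10))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

for _ in range(12):
    print(handle_request("203.0.113.7"))  # the burst is absorbed, then requests are throttled
```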

Question 169

Which cryptographic principle ensures that a sender cannot deny sending a message?

A) Confidentiality
B) Integrity
C) Availability
D) Non-repudiation

Answer: D

Explanation:

Non-repudiation is a cryptographic principle that guarantees the authenticity of a message sender, preventing them from denying the transmission of a message. This is commonly achieved using digital signatures, public key infrastructure, and cryptographic hash functions. Non-repudiation is critical in scenarios such as electronic transactions, legal communications, and sensitive business correspondence. Option A is incorrect because confidentiality ensures that only authorized parties can access the data. Option B is incorrect because integrity ensures that data has not been altered or tampered with. Option C is inaccurate because availability guarantees that data and systems are accessible when needed. Non-repudiation enhances trust in electronic communications by providing verifiable evidence of origin and integrity. Effective implementation involves combining cryptographic mechanisms with procedural controls, such as key management policies, authentication protocols, and audit logs. Non-repudiation also supports regulatory compliance by ensuring accountability, traceability, and legal enforceability of digital transactions. Security professionals must understand the importance of non-repudiation to design systems that protect against fraud, unauthorized actions, and disputes over transaction authenticity.
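
A minimal digital-signature sketch using the third-party cryptography package (an assumption) with Ed25519 keys: only the holder of the private key can produce a valid signature, and anyone with the public key can verify it, which is the mechanical basis of non-repudiation.

```python
# pip install cryptography  (third-party library; assumed available)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # kept secret by the sender
public_key = private_key.public_key()        # distributed to verifiers

message = b"Transfer 500 USD to account 12345"
signature = private_key.sign(message)        # only the private-key holder can produce this

try:
    public_key.verify(signature, message)    # raises if message or signature was altered
    print("Signature valid: the sender cannot plausibly deny signing this message")
except InvalidSignature:
    print("Signature invalid: message altered or not signed by this key")
```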

Question 170

Which of the following attacks modifies network traffic in transit to intercept sensitive information?

A) Man-in-the-middle (MITM) attack
B) Cross-site request forgery (CSRF)
C) Brute-force attack
D) Rootkit installation

Answer: A

Explanation:

A man-in-the-middle (MITM) attack occurs when an attacker intercepts and potentially modifies communication between two parties without their knowledge. MITM attacks can target web traffic, emails, instant messages, or network sessions. Attackers may eavesdrop, inject malicious content, or steal credentials, sensitive data, or financial information. Option B is incorrect because CSRF tricks a user into performing unintended actions on a web application, rather than intercepting traffic. Option C refers to brute-force attacks that systematically guess passwords or keys and do not involve intercepting communication. Option D describes a rootkit, which hides malicious activity on a system, not network interception. MITM attacks can exploit unsecured networks, weak encryption, or poorly configured SSL/TLS implementations. Mitigation strategies include using encrypted communication channels, strong authentication mechanisms, certificate pinning, VPNs, and continuous network monitoring. Understanding MITM attacks is crucial for network security professionals, as these attacks can compromise confidentiality, integrity, and trust in digital communication. Properly configured cryptography and user awareness significantly reduce the risk and impact of MITM exploits.
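
One of the mitigations listed above, certificate pinning, can be sketched by comparing the SHA-256 fingerprint of the certificate a server presents against a fingerprint recorded in advance. The pinned value below is a placeholder, and real deployments also need an update path for legitimate certificate rotation.

```python
import hashlib
import ssl

# Placeholder pin: record this fingerprint out-of-band while the connection is trusted.
PINNED_SHA256 = "replace-with-known-good-fingerprint"

def server_cert_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's certificate and return its SHA-256 fingerprint (hex)."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def connection_is_pinned(host: str) -> bool:
    # A mismatch may indicate a MITM proxy presenting a different certificate.
    return server_cert_fingerprint(host) == PINNED_SHA256

print(server_cert_fingerprint("example.com"))
```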

Question 171

Which of the following best describes social engineering in cybersecurity?

A) Exploiting system vulnerabilities through scripts and exploits
B) Manipulating people to disclose confidential information
C) Deploying ransomware to encrypt user files
D) Performing port scans to map network infrastructure

Answer: B

Explanation:

Social engineering is a technique used to manipulate individuals into revealing confidential information, performing specific actions, or bypassing security protocols. Unlike technical attacks, social engineering exploits human psychology, trust, curiosity, or fear. Common methods include phishing emails, pretexting, baiting, tailgating, and impersonation. Option A is incorrect because it refers to exploiting technical vulnerabilities, not human factors. Option C involves ransomware, a malware category unrelated to direct manipulation of individuals. Option D describes network reconnaissance rather than human exploitation. Social engineering is highly effective because it targets the weakest link in cybersecurity: the human element. Awareness training, simulated phishing campaigns, strict verification protocols, and organizational security culture are vital defenses. Security professionals must recognize that social engineering can be combined with technical attacks, creating blended threats. For example, attackers may use social engineering to obtain credentials and then deploy malware for deeper network penetration. Understanding the principles, psychology, and tactics of social engineering enables organizations to implement proactive defenses, reducing the likelihood of compromise through human error or manipulation.

Question 172

Which type of malware hides its presence while maintaining persistent access to a system?

A) Worm
B) Rootkit
C) Trojan
D) Spyware

Answer: B

Explanation:

A rootkit is a type of malware designed to maintain privileged access while concealing its presence on a system. Rootkits can operate at the user or kernel level and may alter operating system functionality to hide files, processes, or network connections. This stealth capability allows attackers to maintain control over compromised systems for extended periods without detection. Option A is incorrect because worms focus on self-replication and spreading. Option C is incorrect because Trojans typically perform specific malicious functions but do not inherently hide their presence. Option D refers to spyware, which collects information but may not maintain persistent, hidden control. Rootkits are particularly dangerous because they undermine trust in system integrity and can evade traditional security tools such as antivirus software. Detection often requires advanced techniques like memory analysis, integrity checking, and behavioral monitoring. Preventing rootkit infections involves maintaining up-to-date systems, applying patches, enforcing least privilege access, and monitoring for unusual activity. Security professionals must understand rootkit behavior to implement effective detection, removal, and mitigation strategies, ensuring that systems remain secure and operational even under sophisticated attacks.
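
One of the detection techniques mentioned, integrity checking, can be sketched by hashing critical files and comparing them against a trusted baseline; a silent change to a system binary is a red flag. The watched paths and baseline file below are illustrative, and a kernel-level rootkit can subvert the very tools doing the checking, which is why offline or out-of-band verification is preferred.

```python
import hashlib
import json
from pathlib import Path

# Illustrative watch list; real baselines cover critical binaries and configuration files.
WATCHED_FILES = ["/usr/bin/ssh", "/etc/passwd"]

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(baseline_path: str = "baseline.json") -> None:
    """Record known-good hashes while the system is trusted (e.g., right after install)."""
    baseline = {f: sha256_of(f) for f in WATCHED_FILES if Path(f).exists()}
    Path(baseline_path).write_text(json.dumps(baseline, indent=2))

def check_integrity(baseline_path: str = "baseline.json") -> list[str]:
    """Return files whose current hash no longer matches the baseline."""
    baseline = json.loads(Path(baseline_path).read_text())
    return [f for f, digest in baseline.items()
            if Path(f).exists() and sha256_of(f) != digest]

# build_baseline()            # run once on a trusted system
# print(check_integrity())    # run periodically; any output warrants investigation
```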

Question 173

Which protocol provides secure communication over an unsecured network, ensuring confidentiality and integrity?

A) HTTP
B) FTP
C) TLS
D) Telnet

Answer: C

Explanation:

Transport Layer Security (TLS) is a cryptographic protocol that ensures secure communication over unsecured networks. TLS provides confidentiality through encryption, integrity through hashing, and authentication using digital certificates. It is widely used to protect web traffic (HTTPS), email communications, and other network protocols. Option A, HTTP, is an insecure protocol without encryption. Option B, FTP, transmits data in cleartext unless secured with TLS (FTPS), making it inherently insecure on its own. Option D, Telnet, is an insecure protocol that transmits credentials and data in plaintext. TLS employs a handshake process to negotiate encryption algorithms, exchange keys, and verify certificates, creating a secure communication channel. Security professionals must understand TLS implementation, certificate management, and proper configuration to prevent vulnerabilities such as man-in-the-middle attacks or weak cipher suites. Ensuring the use of up-to-date TLS versions and strong cryptographic parameters is critical for protecting sensitive data and maintaining trust in digital communications.
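
A short sketch of the client side of a TLS connection using Python's standard ssl module: the default context verifies the certificate chain and hostname, and the negotiated protocol version and cipher suite can be inspected after the handshake. The hostname is illustrative.

```python
import socket
import ssl

host = "example.com"
context = ssl.create_default_context()  # verifies the certificate chain and hostname

with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated version:", tls.version())  # e.g. 'TLSv1.3'
        print("Cipher suite:", tls.cipher())          # (name, protocol, key bits)
        print("Peer certificate subject:", tls.getpeercert().get("subject"))
```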

Question 174

Which of the following security controls focuses on preventing the exploitation of vulnerabilities before an attack occurs?

A) Detective
B) Corrective
C) Preventive
D) Compensating

Answer: C

Explanation:

Preventive security controls are designed to stop security incidents before they occur by reducing vulnerabilities and mitigating potential threats. These controls include firewalls, access controls, antivirus software, encryption, security policies, and employee training. The primary goal is to prevent unauthorized access, malware infection, data breaches, and other incidents proactively. Option A, detective controls, focuses on identifying incidents and alerting administrators after they occur. Option B, corrective controls, aims to remediate or repair damage caused by security incidents. Option D, compensating controls, provides alternative protections when primary controls cannot be implemented. Preventive controls are integral to a layered security strategy, often referred to as defense-in-depth, where multiple overlapping measures ensure that a failure in one control does not compromise the system. Security professionals must assess organizational risks, prioritize preventive measures, and continuously monitor control effectiveness to maintain a proactive cybersecurity posture. Implementing preventive controls significantly reduces the likelihood and impact of attacks, safeguarding critical assets and supporting compliance with regulatory frameworks.

Question 175

Which type of wireless attack involves capturing authentication handshakes to crack network passwords?

A) Rogue access point
B) Evil twin
C) WPA/WPA2 handshake capture
D) Jamming attack

Answer: C

Explanation:

WPA/WPA2 handshake capture attacks target wireless networks by intercepting the authentication handshake process between clients and access points. Attackers capture these handshake packets, which contain cryptographic material used to verify passwords. Once the handshake is captured, offline brute-force or dictionary attacks can be employed to crack the network password. Option A, the rogue access point, involves an unauthorized device posing as a legitimate network node, but it does not specifically capture handshakes. Option B, the evil twin, is a variant of the rogue AP attack designed to trick users into connecting to a malicious AP. Option D, jamming, disrupts wireless communication but does not capture authentication credentials. Preventing handshake capture requires strong WPA2/WPA3 encryption, complex passphrases, and monitoring for unauthorized access points. Security professionals must understand handshake attack techniques to secure wireless networks and protect against credential theft, data interception, and unauthorized access.

Question 176

Which authentication factor relies on something the user possesses?

A) Knowledge
B) Inherence
C) Possession
D) Location

Answer: C

Explanation:

Possession-based authentication factors rely on items that a user physically possesses, such as smart cards, security tokens, mobile devices, or hardware keys. These factors are part of multi-factor authentication (MFA) strategies that combine multiple independent factors to enhance security. Option A, knowledge, refers to something the user knows, such as a password or PIN. Option B, inherence, involves something intrinsic to the user, like biometric characteristics including fingerprints, iris scans, or voice recognition. Option D, location, can be considered an environmental or contextual factor but is not possession-based. Possession factors are particularly effective when combined with other factors to prevent unauthorized access if credentials are stolen or compromised. Security professionals must design authentication systems that balance usability and security, ensuring that possession-based factors are protected against loss, theft, or duplication, and integrated with robust incident response procedures to mitigate the risk of unauthorized access.
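
A common possession factor in practice is a time-based one-time password (TOTP) generated on a device the user holds. The sketch below follows the RFC 6238 construction using only standard-library modules; the shared secret shown is an illustrative placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step               # moving factor: 30-second window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Illustrative shared secret; real secrets are provisioned via QR code or hardware token.
print(totp("JBSWY3DPEHPK3PXP"))  # matches the code shown in the user's authenticator app
```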

Question 177

Which of the following is a primary characteristic of ransomware?

A) Stealthy monitoring of system activity
B) Encrypting files and demanding payment for decryption
C) Automatically replicating across networks
D) Altering system logs to hide intrusions

Answer: B

Explanation:

Ransomware is a type of malicious software that encrypts the victim’s files or entire systems, rendering data inaccessible, and demands payment, often in cryptocurrency, for the decryption key. The primary objective is financial gain, and attacks may target individuals, businesses, or critical infrastructure. Option A, stealthy monitoring, is characteristic of spyware, not ransomware. Option C, automatic replication, describes worms, which spread without user interaction. Option D, altering system logs, is typical behavior of rootkits to conceal malicious activity. Ransomware attacks can enter systems through phishing emails, malicious downloads, or vulnerabilities in unpatched systems. Prevention measures include regular backups, patch management, endpoint protection, network segmentation, and user awareness training. Response strategies involve incident isolation, forensic analysis, and secure restoration of data. Understanding ransomware behavior helps cybersecurity professionals design proactive measures to minimize impact, protect sensitive data, and ensure business continuity.

Question 178

Which of the following attacks exploits vulnerabilities in web applications by injecting malicious SQL commands?

A) Cross-site scripting (XSS)
B) SQL injection (SQLi)
C) Session hijacking
D) Buffer overflow

Answer: B

Explanation:

SQL injection (SQLi) is a web application attack in which an attacker injects malicious SQL statements into input fields or URLs to manipulate the backend database. The attack can lead to unauthorized data access, modification, deletion, or even administrative control over the database. Option A, cross-site scripting, involves injecting malicious scripts into web pages to attack clients, not databases. Option C, session hijacking, targets active sessions to impersonate legitimate users. Option D, buffer overflow, exploits memory management vulnerabilities at the system level rather than the database level. SQLi attacks exploit poor input validation and insufficient parameterized queries, making secure coding practices, input sanitization, and prepared statements critical for mitigation. Security professionals must perform regular vulnerability assessments, penetration testing, and monitoring to identify and remediate SQLi risks. Awareness of SQL injection techniques enables organizations to protect sensitive information, comply with regulations, and maintain the integrity and confidentiality of data systems.
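
The contrast between string-built queries and parameterized queries can be shown with Python's standard sqlite3 module; the table and data are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# VULNERABLE: user input concatenated into the SQL string becomes part of the query,
# so the OR '1'='1' clause matches every row and bypasses the password check.
unsafe = f"SELECT * FROM users WHERE username = '' AND password = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())                       # returns alice's row

# SAFE: a parameterized query treats the input as data, never as SQL syntax.
safe = "SELECT * FROM users WHERE username = ? AND password = ?"
print(conn.execute(safe, ("", attacker_input)).fetchall())   # returns []
```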

Question 179

Which type of policy outlines acceptable and unacceptable use of organizational resources?

A) Privacy policy
B) Acceptable Use Policy (AUP)
C) Business continuity plan
D) Incident response plan

Answer: B

Explanation:

An Acceptable Use Policy (AUP) defines the proper and improper use of organizational resources such as computers, networks, internet access, and email systems. It establishes clear guidelines for employees, contractors, and third parties, promoting security awareness and regulatory compliance. Option A, a privacy policy, describes how personal data is collected, stored, and shared. Option C, a business continuity plan, focuses on maintaining operations during disruptions. Option D, an incident response plan, outlines procedures for addressing security incidents. AUPs help prevent misuse, unauthorized access, and legal liabilities by clearly communicating expectations. Effective AUPs are supported by technical controls, monitoring, training programs, and enforcement mechanisms. Security professionals must ensure AUPs are updated regularly to reflect evolving technology, threats, and regulatory requirements, balancing organizational security and employee productivity while mitigating risk.

Question 180

Which of the following is a key component of a layered security strategy?

A) Relying solely on antivirus software
B) Implementing multiple overlapping controls such as firewalls, IDS, and encryption
C) Using only strong passwords
D) Allowing unrestricted network access for convenience

Answer: B

Explanation:

A layered security strategy, also called defense-in-depth, employs multiple overlapping security controls to protect systems and data. This approach ensures that if one control fails, others provide continued protection. Key components include firewalls, intrusion detection and prevention systems, endpoint protection, encryption, access controls, monitoring, and employee training. Option A is insufficient because relying solely on antivirus software cannot address all attack vectors. Option C, using strong passwords, is essential but only one aspect of a comprehensive strategy. Option D undermines security and increases risk. Defense-in-depth balances preventative, detective, and corrective controls, combining technical, administrative, and physical measures to address a wide range of threats. By layering defenses, organizations reduce the likelihood and impact of security incidents, increase resilience, and support compliance with regulations and industry best practices.

 
