CompTIA SecurityX CAS-005 Exam Dumps and Practice Test Questions Set 7, Questions 121-140


Question 121

Which type of attack exploits a vulnerability in an application to execute arbitrary code on a target system?

( A ) SQL Injection
( B ) Buffer Overflow
( C ) Cross-Site Scripting
( D ) Phishing

Answer: B

Explanation:

A buffer overflow attack is a type of security vulnerability that occurs when an application attempts to write more data to a memory buffer than it is designed to hold, causing the excess data to overwrite adjacent memory locations. This can have serious consequences, as it may allow an attacker to inject malicious code into the program’s memory and execute it, potentially gaining unauthorized control of the system. Buffer overflow attacks differ from other common cyberattacks, such as SQL injection, which manipulates database queries to extract or modify data; cross-site scripting, which targets web browsers to execute scripts in a victim’s session; or phishing, which relies on deceiving users into revealing sensitive information. Instead, buffer overflow exploits flaws in memory management and input handling within the software itself.

Attackers often leverage buffer overflow vulnerabilities to bypass security mechanisms, execute arbitrary commands, escalate privileges, or compromise the integrity and availability of a system. These attacks can be particularly dangerous in legacy software, embedded systems, or applications that lack adequate input validation and memory protections. A successful buffer overflow can result in system crashes, data corruption, or complete control over a vulnerable machine, which makes it a critical risk for organizations that rely on outdated or poorly secured applications.

Mitigating buffer overflow attacks requires a combination of secure development practices, system hardening, and runtime protections. Developers should implement input validation, bounds checking, and safe memory handling techniques to prevent unintended memory writes. Modern compilers and operating systems provide protections such as stack canaries, which detect overwrites on the stack; address space layout randomization (ASLR), which makes memory locations unpredictable; and data execution prevention (DEP), which prevents execution of code from non-executable memory regions. Regular patch management, code audits, and penetration testing further reduce exposure by identifying and remediating vulnerable software components. By combining proactive software engineering with layered runtime defenses, organizations can significantly reduce the risk posed by buffer overflow attacks while enhancing overall system security and reliability.
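To make bounds checking concrete, here is a minimal Python sketch. Buffer overflows are fundamentally a memory-safety problem in languages such as C, so this only illustrates the defensive pattern of validating input length before writing into a fixed-size buffer; the buffer size and function name are illustrative, not from any particular application.

```python
BUFFER_SIZE = 64  # stands in for a fixed-size stack buffer in a C program

def copy_into_buffer(data: bytes) -> bytearray:
    """Copy untrusted input into a fixed-size buffer only after an explicit bounds check."""
    if len(data) > BUFFER_SIZE:
        raise ValueError(f"input of {len(data)} bytes exceeds the {BUFFER_SIZE}-byte buffer")
    buffer = bytearray(BUFFER_SIZE)
    buffer[:len(data)] = data          # safe: the length was validated first
    return buffer

print(copy_into_buffer(b"hello")[:5])  # bytearray(b'hello')
# copy_into_buffer(b"A" * 200)         # rejected instead of overwriting adjacent memory
```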

Question 122

Which security principle restricts user access rights to only the resources necessary for their role?

( A ) Separation of Duties
( B ) Principle of Least Privilege
( C ) Mandatory Access Control
( D ) Need-to-Know

Answer: B

Explanation:

The Principle of Least Privilege is a fundamental security concept that mandates that users, applications, and systems be granted only the minimum level of access required to perform their legitimate tasks. By limiting access to essential resources, this principle helps reduce the potential attack surface and limits the damage that could result from compromised accounts or malicious activity. Unlike Separation of Duties, which divides responsibilities among multiple individuals to prevent fraud, Mandatory Access Control, which enforces strict system-level security policies, or the Need-to-Know principle, which restricts access to sensitive information based on confidentiality, least privilege emphasizes operational access control and ensures that privileges are not excessive or unnecessary.

Implementing the principle of least privilege involves several key strategies. Role-based access control (RBAC) is commonly used, allowing access rights to be assigned to defined roles rather than individual users, ensuring consistency and simplifying administration. Regular access reviews and audits are critical to verify that privileges remain aligned with current job functions and that no unnecessary permissions persist. Temporary privilege escalation policies can also be implemented, allowing users to gain elevated access only for specific tasks and for limited time periods, after which privileges revert automatically. This approach reduces the window of opportunity for misuse or exploitation.
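As a rough illustration of how RBAC supports least privilege, the following Python sketch maps hypothetical roles to permissions and denies anything outside a user's assigned roles; the role, user, and permission names are made up for the example.

```python
# Minimal role-based access control sketch; role and permission names are hypothetical.
ROLE_PERMISSIONS = {
    "helpdesk": {"ticket:read", "ticket:update"},
    "dba":      {"db:read", "db:write", "db:backup"},
    "auditor":  {"db:read", "log:read"},
}

USER_ROLES = {
    "alice": {"helpdesk"},
    "bob":   {"auditor"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles carries the permission (least privilege)."""
    roles = USER_ROLES.get(user, set())
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

print(is_allowed("alice", "ticket:update"))  # True  - within her role
print(is_allowed("alice", "db:write"))       # False - not required for her job function
```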

Question 123

Which method allows organizations to detect unauthorized changes or suspicious activity on systems and networks?

( A ) Intrusion Detection System
( B ) Firewall
( C ) Data Loss Prevention
( D ) Antivirus

Answer: A

Explanation:

An Intrusion Detection System (IDS) is a security technology designed to monitor network traffic, system activities, and application behavior to identify signs of unauthorized access, policy violations, or suspicious activity. Its primary purpose is to detect potential security incidents in real time, alert administrators, and provide actionable intelligence for response. Unlike firewalls, which control and restrict traffic flow based on predefined rules, data loss prevention systems, which focus on preventing sensitive information from leaving the organization, and antivirus solutions, which detect and remove malicious software, an IDS specializes in observing and analyzing activity to recognize both known and unknown threats.

IDS solutions are generally categorized into two types: signature-based and anomaly-based. Signature-based IDS uses a database of known attack patterns to identify malicious activity. This method is effective at detecting familiar threats but may struggle against new, unknown attacks. Anomaly-based IDS, on the other hand, establishes a baseline of normal behavior for systems, networks, or applications and triggers alerts when deviations occur. This approach can identify novel or previously unseen attacks, although it requires careful tuning to minimize false positives and false negatives.
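The anomaly-based approach can be pictured with a toy statistical baseline. The Python sketch below is a simplification (a single metric and a standard-deviation threshold) and does not represent how any particular IDS product works.

```python
from statistics import mean, stdev

# Hypothetical baseline: requests-per-minute samples gathered during normal operation.
baseline = [42, 38, 45, 40, 41, 39, 44, 43, 37, 46]

def is_anomalous(observed: float, samples: list[float], threshold: float = 3.0) -> bool:
    """Flag an observation that deviates more than `threshold` standard deviations from baseline."""
    mu, sigma = mean(samples), stdev(samples)
    return abs(observed - mu) > threshold * sigma

print(is_anomalous(44, baseline))    # False - within normal variation
print(is_anomalous(900, baseline))   # True  - likely flood or scan, raise an alert
```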

Modern IDS platforms often integrate with Security Information and Event Management (SIEM) systems, enabling centralized log collection, correlation, and analysis. This integration enhances threat visibility, supports automated incident response workflows, and facilitates forensic investigations by providing detailed historical records of security events. IDS deployment is typically strategic, with sensors positioned at network perimeters, critical internal segments, or key servers to capture relevant traffic and monitor high-value assets effectively.

Question 124

Which authentication factor relies on something the user has, such as a token or smart card?

( A ) Knowledge Factor
( B ) Possession Factor
( C ) Inherence Factor
( D ) Location Factor

Answer: B

Explanation:

Possession factors are a category of authentication mechanisms that rely on something a user physically owns to verify their identity. Common examples include smart cards, hardware tokens, security fobs, and mobile authentication devices such as apps that generate one-time passwords (OTPs). These factors differ fundamentally from knowledge-based authentication, which relies on information like passwords or PINs; inherence factors, which use unique biological traits such as fingerprints or facial recognition; and location factors, which verify identity based on the user’s geographic location. By requiring a physical item that the user possesses, possession factors add a tangible layer of security that complements other authentication methods.

Possession factors are particularly important in multi-factor authentication (MFA) systems, where they are combined with knowledge factors or inherence factors to create a more robust security posture. For example, a user may need to enter a password (something they know) and then confirm a one-time code generated by a hardware token (something they have). This combination significantly reduces the likelihood of unauthorized access because an attacker would need both the credential and the physical device to succeed.

The design of possession-based authentication mechanisms often incorporates features to enhance security and reliability. Time-based one-time passwords (TOTP) limit the validity of each code to a short window, reducing the risk of replay attacks. Hardware tokens or smart cards may include cryptographic keys that perform secure signing or encryption functions, providing additional verification beyond simple code generation. Organizations also implement strict procedures for issuing, revoking, and replacing tokens or devices to prevent misuse or theft.
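A minimal Python sketch of a TOTP generator, in the spirit of RFC 6238, shows how a possession factor derives short-lived codes from a shared secret. The base32 secret shown is a placeholder for the example, not a real credential.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password derived from a shared base32 secret (RFC 6238 style)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                      # 30-second validity window
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# "JBSWY3DPEHPK3PXP" is a demo secret for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```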

Question 125

Which access control model allows users to make decisions about who can access their resources? 

( A ) Mandatory Access Control
( B ) Discretionary Access Control
( C ) Role-Based Access Control
( D ) Attribute-Based Access Control

Answer: B

Explanation:

Discretionary Access Control, commonly referred to as DAC, is an access control model that allows resource owners—such as users or administrators—to decide who can access their files, applications, or other resources. In this model, the owner of a resource has the authority to grant or restrict access to other users, making it distinct from other access control approaches. For instance, Mandatory Access Control (MAC) enforces system-defined policies based on security labels and classifications, leaving little room for user discretion. Role-Based Access Control (RBAC) assigns permissions based on defined organizational roles rather than individual ownership, while Attribute-Based Access Control (ABAC) evaluates access decisions dynamically based on user, resource, or environmental attributes. DAC stands out because it provides owners with flexibility and autonomy in managing their resources, which can enhance productivity and responsiveness in collaborative environments.

However, the flexibility of DAC can also introduce significant security challenges. Because users are responsible for granting access, there is potential for excessive privilege assignment, accidental exposure, or unauthorized sharing of sensitive resources. Mismanagement of access rights can lead to confidentiality breaches, data modification, or even system compromise. To mitigate these risks, organizations often combine DAC with monitoring and auditing practices. Regular reviews of access permissions, logging of changes, and user awareness programs are critical to ensuring that access remains appropriate and aligned with organizational security policies.

DAC is commonly implemented in file systems, collaboration platforms, and cloud storage solutions, where users need to share and manage data efficiently. For example, in a cloud environment, a user might grant colleagues read or write access to specific documents while keeping other files restricted. Organizations must enforce policies to prevent privilege escalation and ensure that resource sharing does not compromise critical systems. Additionally, integrating DAC with other security measures such as encryption, authentication, and network controls strengthens its effectiveness.
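A simplified Python sketch of DAC might look like the following, where only the resource owner can modify the access control list. The file name, users, and rights are hypothetical.

```python
# Minimal discretionary access control sketch: the resource owner edits its ACL directly.
class Resource:
    def __init__(self, name: str, owner: str):
        self.name = name
        self.owner = owner
        self.acl: dict[str, set[str]] = {owner: {"read", "write", "share"}}

    def grant(self, grantor: str, grantee: str, rights: set[str]) -> None:
        """Only the owner decides who else gets access - the defining trait of DAC."""
        if grantor != self.owner:
            raise PermissionError(f"{grantor} does not own {self.name}")
        self.acl.setdefault(grantee, set()).update(rights)

    def can(self, user: str, right: str) -> bool:
        return right in self.acl.get(user, set())

doc = Resource("quarterly-report.xlsx", owner="alice")   # hypothetical file and users
doc.grant("alice", "bob", {"read"})
print(doc.can("bob", "read"), doc.can("bob", "write"))    # True False
```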

Question 126

Which type of malware replicates itself across systems without user intervention?

( A ) Virus
( B ) Worm
( C ) Trojan
( D ) Ransomware

Answer: B

Explanation:

A worm is a type of malicious software designed to replicate itself and spread across networks or systems autonomously, without any need for user intervention. This self-replicating behavior distinguishes worms from other forms of malware. For example, viruses require a host file and user action to execute, whereas worms can propagate independently once they exploit a vulnerability in a system. Trojans, on the other hand, disguise themselves as legitimate applications to trick users into installing them, and ransomware encrypts files to extort victims financially. Worms are unique in their ability to move rapidly across connected devices, often exploiting security weaknesses in operating systems, network services, or applications to infect multiple targets in a short period.

Once a worm infiltrates a network, it can consume significant bandwidth, slow down systems, or disrupt critical services, leading to operational outages. In addition to spreading, some worms carry malicious payloads that can steal sensitive data, install backdoors, or facilitate remote control by attackers. The speed and scale of worm propagation make them particularly dangerous, as infections can escalate quickly and affect large numbers of systems before detection or mitigation measures are applied. Historical examples such as the SQL Slammer worm and WannaCry ransomware worm demonstrate how self-replicating malware can cause widespread disruption across global networks, affecting businesses, government infrastructure, and personal devices.

Preventing worm infections requires a combination of technical and procedural safeguards. Timely software patching is crucial, as worms often exploit known vulnerabilities in unpatched systems. Network segmentation can help contain infections and prevent lateral movement within an organization. Intrusion detection and prevention systems can identify and block malicious traffic, while endpoint protection software can prevent initial compromise. User education also plays a role, as phishing emails or unsafe browsing practices are common initial vectors for some worms. By combining proactive patch management, network defenses, monitoring, and security awareness programs, organizations can reduce the risk and impact of worm infections. Ultimately, worms highlight the importance of maintaining strong cybersecurity hygiene to defend against fast-moving and self-propagating threats.

Question 127

Which type of backup ensures that all changes since the last full backup are saved?

( A ) Differential Backup
( B ) Incremental Backup
( C ) Full Backup
( D ) Mirror Backup

Answer: A

Explanation:

Differential backups are a type of backup strategy that captures all changes made to data since the last full backup. This approach provides an effective balance between the comprehensive coverage of full backups and the efficiency of incremental backups. Unlike incremental backups, which only record changes since the previous incremental backup and therefore require a chain of multiple backup files for restoration, differential backups require only the most recent full backup and the latest differential backup to fully restore the system. This significantly simplifies the recovery process while reducing the amount of storage needed compared to performing frequent full backups.

Full backups, by contrast, copy all data each time, offering straightforward recovery but consuming significant storage space and longer backup windows. Mirror backups create real-time replicas of data, providing immediate duplication but requiring substantial storage resources and offering little historical flexibility. Differential backups strike a middle ground by storing only the cumulative changes since the last full backup, reducing both the storage footprint and the complexity of the restoration process. Because of this, differential backups are particularly useful for organizations that need to maintain a reliable recovery point without overburdening their storage infrastructure or backup schedules.
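As a rough Python sketch, a differential backup job could select every file changed since the last full backup's timestamp. The directory path and schedule below are hypothetical placeholders.

```python
import os, time

def files_for_differential(root: str, last_full_backup: float) -> list[str]:
    """Select every file modified since the last full backup (not since the last differential)."""
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_full_backup:
                changed.append(path)
    return changed

# Hypothetical schedule: full backup ran three days ago, differentials run daily.
last_full = time.time() - 3 * 24 * 3600
print(files_for_differential("/srv/data", last_full))  # restore needs only the full backup plus this set
```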

In practice, organizations often implement a hybrid backup strategy, scheduling full backups periodically—such as weekly or monthly—and differential backups more frequently, such as daily. This ensures that recent changes are captured without performing time- and resource-intensive full backups every day. Additionally, testing the backup and recovery process is critical to verify data integrity, ensure that backups are complete, and confirm that recovery procedures are effective in case of system failures or data loss.

Differential backups also support regulatory compliance and business continuity by ensuring that critical operational data can be restored efficiently. By providing a clear and manageable recovery path, organizations can minimize downtime, reduce the impact of data loss incidents, and maintain operational resilience. This makes differential backups a key component of a comprehensive data protection and disaster recovery strategy, balancing efficiency, reliability, and storage optimization.

Question 128

Which cybersecurity framework is widely used to improve critical infrastructure security and risk management?

( A ) NIST Cybersecurity Framework
( B ) ISO 27001
( C ) COBIT
( D ) GDPR

Answer: A

Explanation:

The NIST Cybersecurity Framework (CSF) is a comprehensive set of guidelines designed to help organizations manage and reduce cybersecurity risks while improving overall security posture. Originally developed for critical infrastructure sectors, it has become widely adopted across industries due to its flexibility, risk-based approach, and emphasis on aligning cybersecurity efforts with organizational objectives. Unlike ISO 27001, which prescribes the requirements for establishing an information security management system, COBIT, which focuses on IT governance and management practices, or GDPR, which primarily addresses data privacy and compliance obligations, the NIST CSF provides a practical, adaptable framework that can be applied across organizations of varying sizes and sectors.

The framework is built around five core functions: Identify, Protect, Detect, Respond, and Recover. The Identify function involves understanding the organization’s systems, data, and risks to develop a risk management strategy tailored to business needs. Protect focuses on implementing safeguards, such as access controls, data encryption, and training programs, to limit the impact of potential threats. Detect emphasizes the timely identification of cybersecurity events through monitoring, threat intelligence, and anomaly detection. The Respond function provides guidance on how to take action when an incident occurs, including containment, communication, and mitigation strategies. Finally, Recover ensures that organizations can restore critical services and systems quickly while incorporating lessons learned to strengthen future resilience.

Implementing the NIST CSF encourages organizations to adopt a proactive and structured approach to cybersecurity. It promotes continuous monitoring, risk assessment, and integration with existing policies and compliance frameworks. Organizations can use the framework to benchmark current security capabilities, prioritize resource allocation, and create measurable improvement plans. By providing clear guidance for incident response, resilience, and recovery, the framework not only reduces the likelihood and impact of cyberattacks but also supports regulatory compliance and stakeholder confidence.

Overall, the NIST Cybersecurity Framework enables organizations to build a mature, resilient cybersecurity program that adapts to evolving threats. Its flexibility allows enterprises to tailor its functions to their unique risk profiles, ensuring a balance between operational efficiency, security effectiveness, and business continuity. Through its structured yet adaptable approach, NIST CSF serves as a foundational tool for strengthening cybersecurity posture and sustaining long-term organizational resilience.

Question 129

Which type of attack intercepts communication between two parties to steal or manipulate data?

( A ) Man-in-the-Middle
( B ) SQL Injection
( C ) Cross-Site Request Forgery
( D ) Brute Force

Answer: A

Explanation: 

Man-in-the-Middle (MitM) attacks are a type of cyberattack in which an attacker secretly intercepts, monitors, or alters communications between two parties without their knowledge. The attacker essentially positions themselves between the sender and the receiver, capturing sensitive information, manipulating messages, or impersonating one of the parties to gain unauthorized access. Unlike SQL Injection, which exploits vulnerabilities in databases, Cross-Site Request Forgery, which manipulates user-initiated web requests, or Brute Force attacks, which attempt to guess credentials, MitM attacks focus specifically on intercepting and potentially modifying data while it is in transit. This makes them particularly dangerous because they can compromise both confidentiality and integrity of communications, potentially leading to identity theft, financial loss, or unauthorized access to critical systems.

MitM attacks can take several forms. Packet sniffing allows attackers to capture unencrypted data as it travels over a network. ARP spoofing can redirect traffic within a local network to the attacker’s device, enabling interception and manipulation. Rogue Wi-Fi hotspots, often set up in public places, can trick users into connecting, allowing attackers to monitor traffic and steal sensitive information such as login credentials or session cookies. Attackers may also combine these techniques with social engineering to maximize their effectiveness.

Organizations and individuals can mitigate the risk of MitM attacks through several strategies. Using strong encryption protocols such as TLS or SSL ensures that intercepted communications cannot be easily read or altered. Secure Virtual Private Networks (VPNs) protect data transmitted over public networks by creating encrypted tunnels. Digital certificates and public key infrastructure (PKI) help verify the authenticity of communication endpoints, preventing impersonation. Multi-factor authentication adds an additional layer of protection even if credentials are intercepted. Educating users about the risks of public networks, phishing attempts, and untrusted Wi-Fi connections further reduces exposure.
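For example, Python's standard ssl module can enforce certificate and hostname verification, so a connection through a MitM presenting a forged certificate fails rather than silently succeeding. This is a minimal sketch of the verification step, not a complete client.

```python
import socket, ssl

def fetch_cert_subject(host: str, port: int = 443) -> dict:
    """Open a TLS connection with certificate and hostname verification enabled (MitM defense)."""
    context = ssl.create_default_context()        # verifies the chain against the system CA store
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return dict(item[0] for item in tls.getpeercert()["subject"])

# A spoofed or self-signed certificate makes wrap_socket raise ssl.SSLCertVerificationError.
print(fetch_cert_subject("example.com"))
```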

Question 130

Which security measure prevents the disclosure of sensitive information during transmission?

( A ) Encryption
( B ) Hashing
( C ) Tokenization
( D ) Auditing

Answer: A

Explanation:

Encryption is a fundamental security mechanism designed to protect the confidentiality of data by converting readable information, or plaintext, into an encoded format known as ciphertext. This transformation ensures that sensitive information remains unintelligible to unauthorized users while still allowing authorized parties to decrypt and access the original data. Unlike hashing, which generates a fixed-length representation of data to verify integrity without allowing reversal, tokenization, which substitutes sensitive data with non-sensitive equivalents, or auditing, which tracks system activity, encryption is specifically intended to secure information while maintaining its usability for legitimate purposes. The ability to reverse the encryption process using cryptographic keys distinguishes it as a critical tool for safeguarding communications and data storage.

Encryption is widely applied in modern digital environments to protect information in transit and at rest. Protocols such as Transport Layer Security (TLS) secure web traffic, IPsec safeguards network communication, and Secure Shell (SSH) enables secure remote administration. These protocols prevent unauthorized interception and tampering, ensuring that sensitive communications—ranging from emails to financial transactions—remain private. Organizations rely on encryption to secure virtual private networks (VPNs), cloud storage, messaging platforms, and web applications, thereby mitigating the risk of data breaches, eavesdropping, or man-in-the-middle attacks.

Effective encryption requires more than simply choosing a cryptographic algorithm. Strong key management practices are essential to prevent unauthorized access or misuse. This includes generating, storing, distributing, and rotating cryptographic keys securely. The choice of encryption algorithms, such as AES for symmetric encryption or RSA for asymmetric encryption, also directly impacts the security level. Poor implementation or outdated algorithms can expose data to vulnerabilities despite encryption.
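As a small illustration of symmetric encryption, the sketch below uses the third-party cryptography package's Fernet recipe (authenticated encryption built on AES). In practice the key would come from a managed key store rather than being generated inline.

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production, retrieved from a key management system
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"account number 4111-1111")   # plaintext -> ciphertext
print(ciphertext)                                           # unreadable without the key
print(cipher.decrypt(ciphertext))                           # authorized party reverses it
```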

Beyond technical protection, encryption helps organizations meet regulatory and compliance requirements related to data privacy, including laws like GDPR, HIPAA, and PCI DSS. By ensuring that sensitive data is unreadable without proper authorization, encryption not only preserves confidentiality but also maintains trust with clients and stakeholders. When combined with complementary security measures such as access controls, monitoring, and multi-factor authentication, encryption forms a cornerstone of a comprehensive cybersecurity strategy, significantly reducing the risk of unauthorized access, interception, and data compromise.

Question 131

Which technique is used to obscure data within another file or medium for covert communication?

( A ) Steganography
( B ) Encryption
( C ) Tokenization
( D ) Obfuscation

Answer: A

Explanation:

Steganography is a technique used to conceal information within another file or medium in such a way that the presence of the hidden data is not detectable through casual observation. This method can embed messages, files, or other sensitive content within images, videos, audio files, or even text documents, making the communication appear normal and innocuous. Unlike encryption, which transforms data into an unreadable format but signals that the content is protected, steganography focuses on hiding the existence of the information itself. Similarly, it differs from tokenization, which replaces sensitive information with non-sensitive placeholders, and obfuscation, which complicates code or data to make it harder to understand but does not hide its existence. Steganography is therefore primarily a tool for covert communication rather than content protection.

Malicious actors sometimes use steganography to exfiltrate sensitive data without raising alarms. For example, confidential files might be hidden within innocuous images and transmitted over email or messaging platforms, bypassing conventional security monitoring systems. Attackers may also use steganography to hide malware or command-and-control instructions within media files, making it challenging for security teams to detect malicious activity. This ability to evade detection makes steganography a concern in both cybersecurity and digital forensics.

Detection and mitigation of steganographic threats require specialized techniques collectively referred to as steganalysis. Methods include analyzing file statistics for unusual patterns, detecting anomalies in image or audio frequencies, and using machine learning algorithms to identify suspicious media. Techniques such as least significant bit (LSB) manipulation, image masking, and audio embedding are common ways data is hidden, and organizations must be aware of these methods to identify potential covert channels.
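To show how little the cover data changes, here is a toy Python sketch of least significant bit embedding over raw bytes. Real steganography tools operate on actual image or audio formats; this only demonstrates the bit-level idea.

```python
def embed_lsb(cover: bytearray, secret: bytes) -> bytearray:
    """Hide each bit of `secret` in the least significant bit of successive cover bytes."""
    out = bytearray(cover)
    for i, byte in enumerate(secret):
        for bit in range(8):
            idx = i * 8 + bit
            secret_bit = (byte >> (7 - bit)) & 1
            out[idx] = (out[idx] & 0xFE) | secret_bit   # overwrite only the lowest bit
    return out

def extract_lsb(stego: bytes, length: int) -> bytes:
    """Recover `length` hidden bytes by reassembling the least significant bits."""
    result = bytearray()
    for i in range(length):
        byte = 0
        for bit in range(8):
            byte = (byte << 1) | (stego[i * 8 + bit] & 1)
        result.append(byte)
    return bytes(result)

cover = bytearray(range(64))        # stand-in for raw pixel data
stego = embed_lsb(cover, b"hi")     # visually indistinguishable from the cover
print(extract_lsb(stego, 2))        # b'hi'
```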

Question 132

Which type of attack involves overwhelming a system, service, or network to make it unavailable to legitimate users?

( A ) Man-in-the-Middle
( B ) Denial of Service
( C ) SQL Injection
( D ) Phishing

Answer: B

Explanation:

Denial of Service (DoS) attacks are a type of cyberattack designed to disrupt the normal operation and availability of a system, network, or service. The primary goal of a DoS attack is to make a target resource unavailable to legitimate users by overwhelming it with excessive traffic, consuming critical resources, or exploiting specific vulnerabilities. Unlike Man-in-the-Middle attacks, which focus on intercepting communications, SQL Injection attacks, which exploit database vulnerabilities, or Phishing attacks, which manipulate users into revealing sensitive information, DoS attacks target the availability aspect of cybersecurity. By compromising availability, attackers can cause service interruptions, operational delays, and significant financial and reputational damage.

One common and more sophisticated variant of DoS is the Distributed Denial of Service (DDoS) attack. In a DDoS attack, multiple compromised devices, often part of a botnet, are used to simultaneously flood the target system with traffic. This amplification makes detection and mitigation significantly more challenging, as the malicious traffic originates from numerous sources and can easily overwhelm traditional defense mechanisms. Attackers may exploit weaknesses in network infrastructure, applications, or server resources, using techniques such as TCP SYN floods, HTTP request floods, or amplification attacks to maximize disruption.

To defend against DoS and DDoS attacks, organizations implement a variety of countermeasures. Rate limiting can restrict the number of requests from a single source, while network traffic filtering identifies and blocks suspicious patterns. Redundancy and load balancing distribute workloads across multiple systems, ensuring that the failure or saturation of one server does not impact overall service availability. Additionally, specialized anti-DDoS solutions and cloud-based mitigation services can absorb or divert attack traffic to maintain operational continuity.
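Rate limiting, one of the countermeasures above, can be sketched as a sliding-window counter per source address. The window size and limit below are arbitrary examples.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 100          # hypothetical per-source limit

_requests: dict[str, deque] = defaultdict(deque)

def allow_request(source_ip: str, now: float | None = None) -> bool:
    """Sliding-window rate limit: reject a source that exceeds MAX_REQUESTS per window."""
    now = time.time() if now is None else now
    q = _requests[source_ip]
    while q and now - q[0] > WINDOW_SECONDS:    # drop timestamps outside the window
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False                            # likely flood traffic - throttle or block
    q.append(now)
    return True

print(all(allow_request("203.0.113.9", now=0.0) for _ in range(100)))  # True  - within the limit
print(allow_request("203.0.113.9", now=1.0))                           # False - 101st in the window
```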

Question 133

Which method ensures that a user’s identity is verified through multiple types of credentials?

( A ) Single Sign-On
( B ) Multi-Factor Authentication
( C ) Biometric Verification
( D ) Password Complexity

Answer: B

Explanation:

Multi-Factor Authentication (MFA) is a security mechanism designed to strengthen the verification process for users accessing systems, applications, or networks by requiring two or more distinct types of credentials. These factors typically include something the user knows, such as a password or PIN; something the user has, like a hardware token, smart card, or mobile authentication app; and something the user is, such as a biometric identifier like a fingerprint, facial recognition, or iris scan. By combining multiple factors, MFA significantly reduces the risk that unauthorized individuals can gain access to sensitive information, even if one factor, such as a password, is compromised.

Unlike Single Sign-On (SSO), which primarily focuses on streamlining access across multiple systems with a single set of credentials, MFA ensures that additional layers of verification are required beyond simple login credentials. Similarly, relying solely on biometric verification or password complexity addresses only a single dimension of authentication, leaving systems vulnerable to attacks such as phishing, credential stuffing, or password reuse. MFA mitigates these risks by requiring an attacker to bypass multiple independent factors, making unauthorized access exponentially more difficult.
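Conceptually, an MFA check requires every factor to pass independently. The Python sketch below combines a salted password hash (knowledge) with a one-time code comparison (possession); the user, password, and code values are illustrative only.

```python
import hashlib, hmac, os

def verify_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
    """Knowledge factor: compare a salted PBKDF2 hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_hash)

def verify_login(password: str, otp: str, salt: bytes, stored_hash: bytes, expected_otp: str) -> bool:
    """Both independent factors must pass; compromising the password alone is not enough."""
    knowledge_ok = verify_password(password, salt, stored_hash)
    possession_ok = hmac.compare_digest(otp, expected_otp)   # e.g. a TOTP code from a hardware token or app
    return knowledge_ok and possession_ok

# Hypothetical enrolled user.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 200_000)
print(verify_login("correct horse", "492039", salt, stored, expected_otp="492039"))  # True
print(verify_login("correct horse", "000000", salt, stored, expected_otp="492039"))  # False
```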

Organizations implement MFA across various environments, including enterprise networks, cloud services, remote access portals, and critical applications, to protect sensitive data and maintain compliance with regulatory standards. Modern implementations often include adaptive or risk-based authentication, which adjusts the verification requirements based on contextual factors such as device reputation, location, or unusual login patterns. Token management, secure transmission of one-time codes, and proper enrollment procedures are critical to maintaining MFA’s effectiveness.

In addition to enhancing security, MFA also contributes to accountability and monitoring by creating stronger evidence of verified user actions. When combined with logging, anomaly detection, and alerting systems, MFA supports rapid response to suspicious activities and reduces the likelihood of data breaches or unauthorized transactions. Overall, MFA represents a vital component of a layered security strategy, reinforcing protection for digital assets, improving trust in user identity verification, and aligning with modern cybersecurity and regulatory requirements.

Question 134

Which security control focuses on limiting physical and logical access to systems and sensitive areas?

( A ) Administrative Control
( B ) Physical Control
( C ) Technical Control
( D ) Detective Control

Answer: B

Explanation:

Physical security controls are measures designed to safeguard an organization’s assets, facilities, and equipment from unauthorized access, theft, damage, or tampering. These controls focus on protecting tangible resources and the environments in which sensitive information and critical systems reside. Unlike administrative controls, which rely on policies, procedures, and organizational guidelines to guide behavior, physical security provides a direct, tangible barrier to threats. Similarly, unlike technical controls that rely on software, hardware, or network-based mechanisms to enforce security rules, or detective controls that monitor and alert on unauthorized activity, physical security operates at the human and environmental level to prevent incidents before they occur.

Common physical security measures include locks, access badges, security gates, surveillance cameras, biometric entry systems, and security personnel. These controls are often implemented in layers, creating multiple obstacles for unauthorized individuals attempting to gain access. For example, a secure data center may combine mantraps, card access systems, and 24/7 surveillance to prevent both accidental and deliberate intrusion. Environmental controls, such as fire suppression systems, climate regulation, and flood detection, are also part of a comprehensive physical security program, protecting assets from hazards beyond human threats.

The effectiveness of physical security increases when integrated with technical and administrative controls. For instance, access logs generated by badge readers can feed into monitoring systems, enabling organizations to detect unusual access patterns. Visitor management systems, alarm systems, and incident reporting protocols further enhance protection by ensuring that both employees and guests are accounted for and monitored appropriately.

Question 135

Which type of attack uses deceptive emails to trick users into revealing sensitive information or installing malware?

( A ) Phishing
( B ) Ransomware
( C ) Worm
( D ) Man-in-the-Middle

Answer: A

Explanation:

Phishing attacks are a form of social engineering that deceive users into revealing sensitive information or performing actions that compromise security. They typically arrive through emails, text messages, or fraudulent websites designed to appear legitimate. Attackers often impersonate trusted entities such as banks, government agencies, or colleagues, using psychological manipulation techniques like urgency, fear, or curiosity to trick victims into clicking malicious links, downloading attachments, or entering credentials on fake login pages. This type of attack is one of the most common and effective methods for initiating cyber intrusions, leading to identity theft, financial fraud, or unauthorized access to corporate systems.

Unlike ransomware, which focuses on encrypting data and demanding payment, worms, which self-replicate across networks, or man-in-the-middle attacks, which intercept active communications, phishing relies primarily on exploiting human behavior rather than technical vulnerabilities. Variants of phishing include spear phishing, which targets specific individuals or organizations; whaling, which focuses on executives or high-value targets; and clone phishing, where attackers replicate legitimate messages to insert malicious content.

Defending against phishing requires a combination of technical safeguards and human awareness. Organizations implement email filtering systems that scan for suspicious content, malicious links, or spoofed sender addresses. Advanced filtering solutions use machine learning and threat intelligence to detect evolving phishing techniques. User education and awareness training play a crucial role, helping employees recognize warning signs such as unusual requests, grammatical errors, or unexpected attachments. Simulated phishing campaigns can test and reinforce employee readiness.
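Filtering heuristics can be sketched very roughly in Python. Real email security gateways rely on far richer signals (sender reputation, SPF/DKIM/DMARC results, and machine learning), so the brand name, phrases, and rules below are toy examples only.

```python
import re

# Toy heuristics only - real filters combine reputation data, ML models, and threat intelligence.
URGENCY_PHRASES = {"urgent", "verify your account", "password expires", "act now"}

def looks_suspicious(sender: str, display_name: str, body: str) -> bool:
    """Flag messages whose display name spoofs a brand the sender domain does not match."""
    spoofed = "paypal" in display_name.lower() and not sender.lower().endswith("@paypal.com")
    urgent = any(phrase in body.lower() for phrase in URGENCY_PHRASES)
    raw_ip_link = re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body) is not None  # link to a bare IP
    return spoofed or (urgent and raw_ip_link)

print(looks_suspicious("billing@paypa1-secure.example", "PayPal Support",
                       "URGENT: verify your account at http://203.0.113.7/login"))  # True
```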

Question 136

Which security measure ensures data integrity by producing a fixed-length, unique output from input data?

( A ) Encryption
( B ) Hashing
( C ) Tokenization
( D ) Digital Signature

Answer: B

Explanation: 

Hashing is a cryptographic process that converts input data of any size into a fixed-length string of characters, known as a hash value or digest. The primary purpose of hashing is to ensure data integrity, allowing systems and users to verify that information has not been altered. Unlike encryption, which is designed to protect the confidentiality of data and allows authorized parties to reverse the process, hashing is a one-way function and cannot be reversed to retrieve the original input. Similarly, unlike tokenization, which substitutes sensitive data with non-sensitive equivalents, or digital signatures, which provide both authentication and integrity verification, hashing focuses on uniquely representing data to detect any modifications.

Hashing algorithms, such as SHA-256, SHA-3, or older methods like MD5, generate deterministic outputs, meaning that the same input will always produce the same hash. This property is essential for verifying data integrity. Even a small change in the input, such as altering a single character, results in a completely different hash value, making tampering easily detectable. This characteristic is widely applied in areas like password storage, where user passwords are stored as hashes rather than plaintext. When a user logs in, the system hashes the entered password and compares it to the stored hash to validate credentials without exposing the actual password.

In addition to password protection, hashing is also used for file integrity checks, digital fingerprinting, and validating software downloads. Hashes are often combined with other cryptographic mechanisms, such as digital signatures or encryption, to enhance security. For instance, in digital signatures, hashing ensures that even a minor change in the message will invalidate the signature, thereby guaranteeing authenticity and integrity.
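The avalanche effect and a basic file integrity check can be demonstrated with Python's hashlib; the file name at the end is a placeholder for whatever artifact is being verified.

```python
import hashlib

# Avalanche effect: a one-character change in the input yields a completely different digest.
print(hashlib.sha256(b"transfer $100 to account 12345").hexdigest())
print(hashlib.sha256(b"transfer $900 to account 12345").hexdigest())

def file_digest(path: str) -> str:
    """Compute a SHA-256 digest for an integrity check against a published reference value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# A download is trusted only if file_digest("installer.iso") matches the vendor's published hash.
```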

Question 137

Which process identifies, evaluates, and prioritizes risks to organizational assets?

( A ) Risk Assessment
( B ) Vulnerability Scanning
( C ) Incident Response
( D ) Penetration Testing

Answer: A

Explanation: 

Risk assessment is a structured and methodical process used by organizations to identify, analyze, and prioritize potential threats to their assets, operations, and overall business objectives. The primary goal of risk assessment is to understand the likelihood and impact of various risks so that organizations can make informed decisions regarding security measures, resource allocation, and operational priorities. Unlike vulnerability scanning, which focuses on detecting specific weaknesses in systems, networks, or applications, risk assessment takes a broader perspective by considering threats, vulnerabilities, and the potential consequences of adverse events. Similarly, it differs from incident response, which deals with responding to security incidents after they occur, and penetration testing, which simulates attacks to uncover exploitable flaws. Risk assessment is proactive, providing a foundation for preventing and mitigating risks before they materialize.

The process typically begins with identifying critical assets, including data, systems, personnel, and facilities. Once assets are identified, organizations evaluate potential threats, which can range from cyberattacks, insider threats, and natural disasters to operational failures or regulatory changes. Vulnerability analysis is then conducted to determine how susceptible each asset is to identified threats. By combining threat likelihood and potential impact, organizations assign risk ratings that prioritize which issues require immediate attention and which can be monitored over time.
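A simple way to picture the prioritization step is a likelihood-times-impact matrix. The assets, threats, and scores below are made-up examples of a qualitative scoring approach, not a prescribed methodology.

```python
# Toy qualitative risk matrix: risk = likelihood x impact, each scored 1 (low) to 5 (high).
risks = [
    {"asset": "customer database", "threat": "ransomware",         "likelihood": 4, "impact": 5},
    {"asset": "public website",    "threat": "defacement",         "likelihood": 3, "impact": 2},
    {"asset": "HVAC controller",   "threat": "firmware tampering", "likelihood": 1, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest scores get remediated first; lower scores may simply be monitored over time.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["asset"]}: {r["threat"]}')
```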

Effective risk assessment also involves documentation and communication with stakeholders to ensure that the results are clearly understood and actionable. It serves as the basis for implementing appropriate security controls, developing mitigation strategies, and allocating resources efficiently. Additionally, risk assessment supports compliance with regulatory frameworks and industry standards by demonstrating due diligence in protecting sensitive data and critical operations.

Question 138

Which type of firewall filters traffic based on predetermined rules at the network or transport layer?

( A ) Packet-Filtering Firewall
( B ) Stateful Firewall
( C ) Proxy Firewall
( D ) Next-Generation Firewall

Answer: A

Explanation:

Packet-filtering firewalls serve as a fundamental component of network security by examining packets at the network and transport layers to determine whether they should be allowed to pass through or be blocked. These firewalls make decisions based on predefined rules that consider attributes such as source and destination IP addresses, port numbers, and the type of protocol being used. By evaluating traffic at this level, packet-filtering firewalls provide an essential first line of defense against unauthorized access, helping to prevent attackers from reaching internal network resources. Unlike stateful firewalls, which track the state of network connections to make more context-aware decisions, packet-filtering firewalls operate on individual packets independently, which simplifies operation but limits deeper inspection capabilities. They also differ from proxy firewalls, which function at the application layer and actively inspect the content of traffic, and from next-generation firewalls, which integrate features such as intrusion detection and prevention, application awareness, and advanced threat intelligence.
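The rule-matching logic can be sketched in a few lines of Python. The rule base below is hypothetical, evaluated top-down with a default deny at the end, which mirrors how many packet filters are configured.

```python
import ipaddress

# Hypothetical rule base, evaluated top-down; the first matching rule wins, default deny.
RULES = [
    {"action": "allow", "src": "0.0.0.0/0",  "dst_port": 443,  "proto": "tcp"},
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 22,   "proto": "tcp"},
    {"action": "deny",  "src": "0.0.0.0/0",  "dst_port": None, "proto": None},
]

def filter_packet(src_ip: str, dst_port: int, proto: str) -> str:
    """Stateless check of one packet's header fields against the ordered rule list."""
    for rule in RULES:
        if ipaddress.ip_address(src_ip) not in ipaddress.ip_network(rule["src"]):
            continue
        if rule["dst_port"] is not None and rule["dst_port"] != dst_port:
            continue
        if rule["proto"] is not None and rule["proto"] != proto:
            continue
        return rule["action"]
    return "deny"

print(filter_packet("198.51.100.7", 443, "tcp"))   # allow - public HTTPS
print(filter_packet("198.51.100.7", 22, "tcp"))    # deny  - SSH only from the internal range
```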

The simplicity and efficiency of packet-filtering firewalls make them suitable for high-performance environments where low latency is critical. They are often deployed at network perimeters to filter incoming and outgoing traffic, preventing unauthorized hosts from establishing connections while allowing legitimate communication to continue. However, their effectiveness heavily depends on careful configuration of filtering rules. Misconfigured rules can inadvertently allow malicious traffic or block legitimate services, potentially creating security gaps or operational disruptions. Therefore, continuous monitoring, auditing, and logging of firewall activity are crucial practices for maintaining security and ensuring compliance with organizational policies.

Question 139

Which principle ensures that no single individual has complete control over critical operations, reducing risk of fraud or errors?

( A ) Separation of Duties
( B ) Principle of Least Privilege
( C ) Need-to-Know
( D ) Mandatory Access Control

Answer: A

Explanation:

Separation of Duties (SoD) is a critical security and governance principle designed to reduce the risk of fraud, errors, and misuse of authority by distributing key responsibilities among multiple individuals. By ensuring that no single person has complete control over a critical process, SoD introduces accountability and oversight into organizational operations. This principle is distinct from other security approaches such as the Principle of Least Privilege, which limits user access to only the permissions necessary to perform their tasks, Need-to-Know, which restricts access to information based on relevance, and Mandatory Access Control, which enforces system-level access policies. While those controls focus primarily on access management and information protection, SoD specifically addresses operational processes and the human element in maintaining security and compliance.

Implementing Separation of Duties requires careful planning and analysis of organizational workflows. In financial operations, for example, responsibilities such as authorizing transactions, processing payments, and reconciling accounts are often divided among different individuals to prevent fraudulent activity. Similarly, in IT environments, system administration tasks may be split so that provisioning, configuration, and auditing are handled separately. Approval workflows, sensitive operational tasks, and access to high-value systems also benefit from segregation to prevent a single point of failure or misuse. SoD is especially effective when combined with auditing, logging, and continuous monitoring, as these mechanisms provide evidence of compliance, detect deviations from established procedures, and reinforce accountability.
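The core control can be reduced to a single check in code: the person approving an action must not be the person who requested it. The workflow, names, and amount below are a minimal, hypothetical sketch.

```python
# Minimal sketch of an approval workflow enforcing separation of duties.
def approve_payment(request: dict, approver: str) -> bool:
    """Reject any approval where the approver also created the request."""
    if approver == request["requested_by"]:
        raise PermissionError("separation of duties: requester cannot approve their own payment")
    request["approved_by"] = approver
    return True

payment = {"amount": 25_000, "requested_by": "alice"}   # hypothetical transaction
print(approve_payment(payment, "bob"))                  # True - a different individual approves
# approve_payment(payment, "alice")                     # would raise PermissionError
```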

Question 140

Which security mechanism ensures that a sender cannot deny sending a message?

( A ) Non-Repudiation
( B ) Confidentiality
( C ) Authentication
( D ) Availability

Answer: A

Explanation:

Non-repudiation is a critical security principle that ensures a sender of a message or digital transaction cannot later deny having sent it, providing verifiable proof of origin and integrity. This principle is essential in digital communications and transactions where accountability, trust, and legal enforceability are required. Unlike confidentiality, which focuses on preventing unauthorized access to data, authentication, which verifies the identity of users or systems, and availability, which ensures that systems and data are accessible when needed, non-repudiation specifically addresses the issue of accountability by creating indisputable evidence that a particular action or communication occurred. It is especially important in environments where disputes could arise, such as financial services, legal agreements, or secure email exchanges.

To achieve non-repudiation, organizations commonly use digital signatures in combination with public key infrastructure (PKI) and cryptographic hashing techniques. When a sender digitally signs a message using their private key, recipients can verify the signature with the corresponding public key. This process not only confirms the sender’s identity but also ensures that the content of the message has not been altered in transit. Trusted timestamps can also be employed to record when a message was sent or a transaction was executed, further strengthening the evidence and making it legally defensible. These measures prevent both intentional repudiation by malicious actors and accidental disputes over message integrity or origin.
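Digital signing and verification can be sketched with the third-party cryptography package; Ed25519 is used here purely as an example algorithm, and the message text is illustrative.

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()        # held only by the sender
public_key = private_key.public_key()             # distributed via PKI / a certificate

message = b"Pay vendor 4471 the amount of $10,000"
signature = private_key.sign(message)             # proof of origin bound to this exact content

try:
    public_key.verify(signature, message)         # verification with the sender's public key
    print("signature valid - the sender cannot plausibly deny sending this message")
except InvalidSignature:
    print("signature invalid - message altered or not signed by this key")
```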

Implementing non-repudiation effectively requires robust key management practices, including secure generation, storage, and revocation of cryptographic keys. Organizations must also consider compliance with legal and regulatory frameworks to ensure that digital signatures and records are recognized in legal proceedings. Maintaining audit trails of all digital communications and transactions provides additional verification and supports forensic investigations when needed. By integrating non-repudiation into digital systems, organizations enhance trust in their communications, reduce the risk of fraud or disputes, and reinforce overall cybersecurity resilience. In summary, non-repudiation is an essential element of any comprehensive cybersecurity strategy, ensuring accountability, integrity, and trustworthiness in electronic interactions.
