Question 181
A security administrator is implementing access controls for a highly sensitive financial application. The administrator wants to ensure that users can only access resources if they meet multiple criteria, including role, time of access, and location. Which access control model is most suitable for this scenario?
( A ) Role-Based Access Control (RBAC)
( B ) Attribute-Based Access Control (ABAC)
( C ) Discretionary Access Control (DAC)
( D ) Mandatory Access Control (MAC)
Answer: B
Explanation:
Attribute-Based Access Control (ABAC) is an advanced security framework that determines access permissions based on a combination of attributes associated with users, resources, actions, and environmental conditions. Unlike traditional access models that rely solely on predefined roles or static permissions, ABAC enables dynamic and context-aware access decisions. Attributes may include a user’s department, clearance level, device type, location, or even the time of day. By evaluating these factors collectively, ABAC ensures that only authorized users can access specific resources under appropriate circumstances, offering both flexibility and precision in enforcing security policies.
This approach is particularly valuable in complex environments where static models such as Role-Based Access Control (RBAC) may not provide sufficient granularity. While RBAC simplifies administration by grouping users into roles, it lacks adaptability in situations where access must depend on changing contextual conditions. Similarly, Discretionary Access Control (DAC) allows resource owners to assign permissions, but it can introduce risks if users inadvertently grant access to unauthorized individuals. Mandatory Access Control (MAC), on the other hand, enforces rigid rules based on security classifications, which ensures strong protection but limits flexibility and scalability in dynamic operational environments.
ABAC offers a balanced solution by combining fine-grained control with adaptability. Policies can be expressed in logical statements that the system evaluates at runtime, allowing organizations to manage access dynamically as business requirements or environmental conditions change. This makes ABAC especially suitable for sectors such as finance, healthcare, and government, where sensitive data must be protected while still supporting secure collaboration and remote access.
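To make the runtime evaluation concrete, here is a minimal sketch of an ABAC decision combining role, time, and location. The policy structure, attribute names, and values are hypothetical illustrations, not any product's policy language:

    from datetime import time

    # Hypothetical ABAC policy for a sensitive financial application:
    # access requires an approved role, business hours, and a known location.
    POLICY = {
        "allowed_roles": {"financial_analyst", "auditor"},
        "allowed_hours": (time(8, 0), time(18, 0)),
        "allowed_locations": {"HQ", "branch_ny"},
    }

    def abac_decision(user_attrs, env_attrs, policy=POLICY):
        """Evaluate user and environment attributes against the policy at runtime."""
        start, end = policy["allowed_hours"]
        return (
            user_attrs["role"] in policy["allowed_roles"]
            and start <= env_attrs["time"] <= end
            and env_attrs["location"] in policy["allowed_locations"]
        )

    # An analyst connecting from HQ at 09:30 is permitted; the same analyst
    # connecting at 02:00 from an unknown location would be denied.
    print(abac_decision({"role": "financial_analyst"},
                        {"time": time(9, 30), "location": "HQ"}))  # True

Because the decision is recomputed from current attributes on every request, changing the policy or the user's context changes the outcome without redefining roles.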
Question 182
An organization is implementing endpoint security on all corporate laptops. The goal is to prevent unauthorized applications from executing while minimizing disruption to legitimate users. Which technology best addresses this requirement?
( A ) Antivirus software
( B ) Application whitelisting
( C ) Host-based firewall
( D ) Data Loss Prevention (DLP)
Answer: B
Explanation:
Application whitelisting is a proactive security mechanism that allows only authorized and verified software to execute within an organization’s computing environment. Unlike traditional antivirus solutions that identify and block malware based on known signatures, application whitelisting operates on the principle of default denial—only explicitly approved applications can run, while all others are automatically blocked. This approach significantly reduces the attack surface by preventing the execution of unknown, untrusted, or malicious programs, even if they have not yet been identified by signature-based detection systems.
The core advantage of application whitelisting lies in its ability to prevent zero-day attacks, ransomware, and polymorphic malware, which often evade conventional security tools. In contrast to host-based firewalls that manage network traffic flow but do not stop unauthorized applications from launching, whitelisting ensures that only legitimate, business-approved programs can operate on endpoints. Similarly, while Data Loss Prevention (DLP) systems focus on safeguarding sensitive information from being transmitted or copied outside organizational boundaries, they do not restrict software execution. Application whitelisting therefore fills a unique role by establishing strict control over what software can interact with the operating system and corporate data.
For effective deployment, organizations must maintain a centralized whitelist that includes all necessary business applications, system processes, and updates. This whitelist can be managed through automated tools that synchronize with software inventories and allow for policy adjustments without manual intervention. Integrating application whitelisting with endpoint management platforms ensures scalability across large enterprise networks. Exception handling is critical to avoid workflow disruptions; legitimate applications that are newly introduced or updated must be reviewed and approved before inclusion in the whitelist.
Monitoring and logging activities are essential components of a successful application whitelisting strategy. Continuous oversight allows administrators to detect and respond to unauthorized execution attempts, ensuring that security controls remain aligned with operational needs. Regular audits and updates to the whitelist are necessary to adapt to evolving software requirements and threat landscapes.
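As an illustration of the default-deny principle, the sketch below checks an executable's hash against a centrally managed allowlist before permitting execution. The digests and function names are hypothetical placeholders:

    import hashlib

    # Hypothetical allowlist of SHA-256 digests for approved binaries, normally
    # synchronized from a central endpoint-management platform.
    APPROVED_SHA256 = {
        "placeholder_digest_for_corp_erp_exe",   # illustrative entries only
        "placeholder_digest_for_browser_exe",
    }

    def is_execution_allowed(path):
        """Default deny: run a binary only if its hash is explicitly approved."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in APPROVED_SHA256   # anything unknown is blocked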
Question 183
A company experiences frequent brute-force attacks against its web application login page. Which control would most effectively mitigate this threat without impacting legitimate users?
( A ) Multi-Factor Authentication (MFA)
( B ) SSL/TLS encryption
( C ) Network segmentation
( D ) Load balancing
Answer: A
Explanation:
Multi-Factor Authentication (MFA) is a critical security mechanism designed to strengthen access control by requiring users to verify their identity through multiple independent factors before gaining access to a system or application. Traditional authentication methods rely solely on a single factor—typically a password—which is often vulnerable to brute-force attacks, credential stuffing, or phishing attempts. MFA mitigates these risks by introducing additional verification layers that combine something the user knows (like a password or PIN), something the user has (such as a hardware token, mobile authenticator app, or smart card), and something the user is (biometric identifiers like fingerprints, facial recognition, or voice patterns).
The strength of MFA lies in its ability to prevent unauthorized access even when one factor, such as a password, has been compromised. Attackers attempting brute-force or password-guessing attacks must also possess the second or third authentication factor, making successful exploitation exponentially more difficult. Unlike SSL/TLS, which protects data confidentiality and integrity during transmission but does not stop authentication abuse, MFA directly protects the authentication process itself. Similarly, network segmentation helps isolate systems and reduce attack surfaces but cannot prevent unauthorized logins to web or cloud-based applications. Load balancing improves performance and availability but has no effect on authentication security, highlighting MFA’s unique role in access protection.
Implementing MFA across web applications, VPNs, and cloud services is considered a best practice within modern cybersecurity frameworks such as NIST and ISO 27001. Organizations can choose from various MFA methods, including one-time passwords (OTPs), push notifications, hardware tokens, and biometric verification. For maximum effectiveness, MFA should be paired with adaptive authentication, where contextual factors such as location, device, and user behavior are analyzed to determine risk before granting access.
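For example, the one-time-password factor can be generated with the standard TOTP algorithm (RFC 6238). The sketch below is a minimal stdlib implementation; the shared secret is a well-known test value, not a real credential:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, period=30, digits=6):
        """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // period
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The server and the user's authenticator app share the secret; a correct
    # password (first factor) plus a matching code (second factor) grants access.
    print(totp("JBSWY3DPEHPK3PXP"))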
Question 184
A company wants to ensure that all sensitive documents leaving its internal network are scanned and monitored to prevent unauthorized distribution. Which technology should be implemented?
( A ) Intrusion Detection System (IDS)
( B ) Data Loss Prevention (DLP)
( C ) Antivirus software
( D ) Virtual Private Network (VPN)
Answer: B
Explanation:
Data Loss Prevention (DLP) is a comprehensive security approach that focuses on identifying, monitoring, and controlling the movement of sensitive data across an organization’s digital environment. The primary goal of DLP is to prevent the unauthorized disclosure, transmission, or misuse of confidential information—whether intentional or accidental. DLP systems continuously inspect data in motion across networks, data at rest within storage systems, and data in use on endpoints to ensure that it remains protected according to organizational policies and regulatory requirements.
Unlike intrusion detection systems (IDS), which primarily focus on identifying malicious activity or network intrusions, DLP solutions are specifically designed to safeguard information by controlling how data is accessed and shared. Similarly, antivirus software focuses on detecting and removing malware from endpoints but does not provide visibility or control over sensitive data movement. While virtual private networks (VPNs) secure data transmission through encryption, they cannot stop authorized users from copying or transferring sensitive information outside the organization’s boundaries. DLP fills this critical gap by applying rules and context-aware policies to prevent data exfiltration through channels such as email, cloud storage, web applications, and removable media.
DLP solutions use advanced content inspection techniques, including pattern matching, keyword detection, and fingerprinting, to identify sensitive information such as credit card numbers, personal identification data, or trade secrets. They also incorporate contextual analysis to understand how and where data is being used. When potential violations are detected, DLP can automatically block transfers, quarantine files, or encrypt communications to ensure compliance with standards such as GDPR, HIPAA, or PCI DSS.
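As a simplified example of content inspection, the sketch below (illustrative pattern and rule choices) combines regular-expression matching with a Luhn checksum to flag likely payment card numbers in outbound text:

    import re

    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(number):
        """Luhn checksum, used to cut false positives on card-like digit strings."""
        digits = [int(d) for d in number][::-1]
        total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
        return total % 10 == 0

    def contains_card_number(text):
        for match in CARD_PATTERN.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if luhn_valid(digits):
                return True   # a policy engine would block, quarantine, or encrypt
        return False

    print(contains_card_number("Order ref 4111 1111 1111 1111"))  # True (test PAN)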
Question 185
An organization wants to identify potential vulnerabilities in its web applications before attackers exploit them. Which approach provides proactive assessment of application security?
( A ) Penetration testing
( B ) Vulnerability scanning
( C ) Security audit
( D ) Log analysis
Answer: A
Explanation:
Penetration testing is a structured security practice in which authorized professionals simulate real-world cyberattacks to identify and exploit vulnerabilities within an organization’s web applications, networks, or systems. The objective is to assess how well existing security controls can withstand actual attack scenarios and to uncover weaknesses that could lead to data breaches, unauthorized access, or system compromise. This process goes beyond automated scanning by using human expertise to test the exploitability of vulnerabilities and evaluate their potential impact on business operations.
Unlike vulnerability scanning, which relies on automated tools to detect known issues, penetration testing provides a hands-on, scenario-based assessment that mimics the techniques and strategies of real attackers. It also differs from security audits, which focus primarily on evaluating security policies, governance frameworks, and regulatory compliance rather than actively exploiting weaknesses. Similarly, log analysis helps identify suspicious activities or past security incidents but does not proactively uncover vulnerabilities before they are exploited. Penetration testing bridges this gap by identifying both known and unknown risks in a controlled and ethical manner.
A typical penetration test includes several phases such as reconnaissance, scanning, exploitation, privilege escalation, post-exploitation, and reporting. During these stages, testers examine areas like authentication mechanisms, input validation, access controls, and network configurations. They attempt to exploit discovered weaknesses to understand the depth of exposure and the potential consequences if a malicious attacker were to target the same vulnerabilities.
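As a toy illustration of the scanning phase only, the sketch below performs a basic TCP connect scan. The host name is hypothetical, and any such activity must be explicitly authorized in advance:

    import socket

    def tcp_connect_scan(host, ports):
        """Report which TCP ports accept connections (scanning phase, simplified).
        Only ever run against systems you are authorized to test."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)
                if s.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        return open_ports

    # print(tcp_connect_scan("target.example", [22, 80, 443]))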
Question 186
A network administrator is configuring firewalls to allow only specific traffic to web servers while blocking all other inbound connections. Which firewall rule type should be applied?
( A ) Default allow
( B ) Implicit deny
( C ) Stateful allow
( D ) Stateless permit
Answer: B
Explanation:
Implicit deny is a foundational security principle that ensures all network traffic is blocked by default unless explicitly permitted by predefined rules. This approach enforces strict control over communication channels and minimizes the potential for unauthorized access or data exposure. By denying all connections initially, administrators can selectively allow only the specific types of traffic necessary for legitimate business operations, such as HTTP or HTTPS requests to a web server. This principle directly supports the concept of least privilege by ensuring that no unnecessary or unintended connections are permitted within the network.
In contrast, default allow policies operate under the assumption that all traffic is trusted unless explicitly blocked. While such configurations may simplify connectivity, they significantly increase the organization’s attack surface by allowing potential malicious or unauthorized communications to pass through unchecked. Similarly, stateful allow firewalls maintain awareness of active connection states and can automatically permit returning traffic that is part of a legitimate session, but they still depend on carefully crafted explicit rules to determine what types of connections are initially allowed. Stateless permit firewalls, on the other hand, make decisions based solely on packet attributes—such as IP addresses and ports—without tracking connection states, which can lead to inconsistencies and potential vulnerabilities.
Implementing an implicit deny policy within firewall configurations and access control lists creates a secure default stance where traffic must meet defined criteria before being accepted. This approach requires administrators to think critically about which services, protocols, and ports should be accessible and to continuously review and update rules as network requirements evolve.
In addition to rule enforcement, implicit deny should be supported by robust logging and monitoring mechanisms. These tools provide visibility into blocked and allowed traffic, helping identify misconfigurations, attempted intrusions, or unauthorized access attempts. Regular audits and real-time analysis of firewall logs ensure that the policy remains effective and aligned with organizational needs.
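The sketch below (a hypothetical rule evaluator, not any vendor's syntax) shows the behavior: only explicitly allowed web traffic passes, and everything else falls through to a default deny:

    # Explicit allow rules for a web server; order matters in real firewalls.
    ALLOW_RULES = [
        {"proto": "tcp", "dport": 80},    # HTTP
        {"proto": "tcp", "dport": 443},   # HTTPS
    ]

    def evaluate(packet):
        for rule in ALLOW_RULES:
            if packet["proto"] == rule["proto"] and packet["dport"] == rule["dport"]:
                return "ALLOW"
        return "DENY"   # implicit deny: no matching rule, connection blocked

    print(evaluate({"proto": "tcp", "dport": 443}))   # ALLOW
    print(evaluate({"proto": "tcp", "dport": 3389}))  # DENY (RDP not permitted)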
Question 187
A company requires encryption for emails to ensure confidentiality and verify sender identity. Which solution is most appropriate for this scenario?
( A ) Transport Layer Security (TLS)
( B ) Pretty Good Privacy (PGP)
( C ) VPN
( D ) Secure FTP (SFTP)
Answer: B
Explanation:
Pretty Good Privacy (PGP) is a cryptographic protocol designed to provide robust security for email communications by ensuring both confidentiality and authenticity. Unlike protocols that only secure email in transit, such as TLS, PGP offers true end-to-end encryption, meaning that messages are encrypted on the sender’s device and can only be decrypted by the intended recipient. This approach guarantees that even if the email is intercepted during transmission, its content remains unreadable to unauthorized parties. Additionally, PGP incorporates digital signatures, which allow the recipient to verify the sender’s identity and confirm that the message has not been altered since it was signed. This combination of encryption and signature verification addresses both privacy and integrity concerns.
While tools like VPNs protect network traffic by creating secure tunnels, they do not provide encryption specific to email content or ensure the authenticity of the sender. Similarly, SFTP is effective for securely transferring files but is not designed to protect the contents of email messages. PGP, by contrast, leverages asymmetric cryptography through the use of public and private key pairs. The sender encrypts the message using the recipient’s public key, ensuring that only the recipient with the corresponding private key can decrypt it. For digital signatures, the sender generates a signature using their private key, which the recipient can verify using the sender’s public key. This mechanism ensures both confidentiality and non-repudiation, preventing attackers from tampering with messages or impersonating the sender.
Implementing PGP within an organization requires careful attention to key management, including the secure generation, storage, and revocation of keys. Training users on proper usage is also critical, as mishandling keys or failing to verify signatures can compromise security. When deployed effectively, PGP protects sensitive communications from eavesdropping, ensures that critical information remains confidential, and helps organizations comply with privacy regulations and industry standards. By providing a reliable method for secure messaging, PGP enhances trust, safeguards intellectual property, and reinforces overall email security posture.
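The asymmetric operations behind PGP can be sketched with Python's cryptography package. This is a simplified illustration: real OpenPGP uses hybrid encryption (a random session key encrypts the message, and only the session key is encrypted with the recipient's public key), whereas here the short message is RSA-encrypted directly:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    recip_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    sender_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    msg = b"Quarterly results attached."

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Confidentiality: encrypt with the recipient's public key.
    ciphertext = recip_priv.public_key().encrypt(msg, oaep)
    # Authenticity and integrity: sign with the sender's private key.
    signature = sender_priv.sign(msg, pss, hashes.SHA256())

    # The recipient decrypts with their private key and verifies with the
    # sender's public key; verify() raises an exception on tampering.
    assert recip_priv.decrypt(ciphertext, oaep) == msg
    sender_priv.public_key().verify(signature, msg, pss, hashes.SHA256())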
Question 188
A system administrator wants to monitor and alert on unusual user behavior, such as multiple failed login attempts or access outside normal hours. Which security solution best supports this requirement?
( A ) SIEM
( B ) IDS
( C ) Antivirus
( D ) Patch management system
Answer: A
Explanation:
Security Information and Event Management (SIEM) systems play a central role in modern cybersecurity by providing comprehensive monitoring, analysis, and management of security-related data across an organization’s IT infrastructure. At its core, SIEM collects logs and events from a wide variety of sources, including servers, network devices, applications, endpoints, and security tools. These logs are then normalized, correlated, and analyzed to identify patterns or behaviors that could indicate potential security threats. Unlike intrusion detection systems (IDS), which primarily monitor network traffic for known attack signatures or anomalies, SIEM provides a holistic view by integrating data from multiple systems and creating context around events. Similarly, antivirus software focuses on identifying and removing malware, while patch management ensures that systems remain up to date, but neither solution provides the real-time event correlation and comprehensive visibility offered by SIEM.
One of the primary advantages of a SIEM system is its ability to detect abnormal or suspicious activities proactively. For instance, it can flag repeated failed login attempts, unusual access from geographically unexpected locations, abnormal privilege escalations, or irregular data transfers. By correlating these activities with historical logs, SIEM can identify threats that might otherwise go unnoticed, including insider threats or coordinated attacks that span multiple systems. The system can also generate alerts in real-time, enabling security teams to respond quickly to potential incidents before they escalate into significant breaches.
In addition to threat detection, SIEM systems support incident response by providing detailed records of events and activities, giving security analysts the context needed to investigate and remediate incidents efficiently. Integration with automated response tools allows for rapid containment actions, such as isolating affected systems or blocking suspicious accounts, reducing potential damage. Furthermore, SIEM supports regulatory compliance by maintaining auditable logs, producing reports, and demonstrating adherence to security policies and standards.
When implemented effectively, SIEM enhances an organization’s security posture by improving situational awareness, enabling timely detection of attacks, reducing dwell time, and facilitating faster response to security incidents. It also provides strategic insights into recurring vulnerabilities or policy violations, helping organizations strengthen defenses and mitigate risks over time. By combining real-time monitoring, historical analysis, and automated alerting, SIEM serves as a cornerstone for proactive and intelligence-driven cybersecurity programs.
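A minimal sketch of the correlation idea (hypothetical event schema and an illustrative five-failures-in-five-minutes rule) might look like this:

    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=5)
    THRESHOLD = 5                       # illustrative correlation rule
    failures = defaultdict(list)        # user -> timestamps of failed logins

    def ingest(event):
        """Correlate normalized log events; alert on brute-force-like patterns."""
        if event["type"] != "login_failure":
            return None
        ts, user = event["time"], event["user"]
        failures[user] = [t for t in failures[user] if ts - t <= WINDOW] + [ts]
        if len(failures[user]) >= THRESHOLD:
            return f"ALERT: {len(failures[user])} failed logins for {user} in 5 minutes"

    now = datetime(2025, 1, 1, 3, 0)    # 03:00, outside normal working hours
    for i in range(5):
        alert = ingest({"type": "login_failure", "user": "jdoe",
                        "time": now + timedelta(seconds=30 * i)})
    print(alert)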
Question 189
A developer wants to prevent sensitive information from being exposed in source code repositories. Which security practice is most effective?
( A ) Code obfuscation
( B ) Secrets management
( C ) Input validation
( D ) Static code analysis
Answer: B
Explanation:
Secrets management tools are essential components in modern cybersecurity and secure software development practices, designed to store, manage, and protect sensitive information such as API keys, passwords, encryption keys, certificates, and other credentials. These tools address a critical risk in software development: the accidental exposure of secrets in source code repositories, configuration files, or deployment scripts. Hard-coding sensitive information in code or sharing it through insecure channels can lead to severe security breaches, as attackers often target these exposed secrets to gain unauthorized access to systems, databases, or cloud services.
Unlike code obfuscation, which only conceals application logic but does not protect sensitive data, or input validation, which prevents injection attacks without safeguarding stored secrets, secrets management provides a systematic and secure approach to handling confidential information. Similarly, static code analysis tools may identify potential coding vulnerabilities or weak practices, but they cannot prevent the misuse or leakage of hard-coded credentials. By using a secrets management system, organizations ensure that sensitive data is stored outside the application code, encrypted, and accessed securely during runtime. These tools also often include auditing capabilities, automatic rotation of secrets, access control policies, and detailed logging to track who accessed what and when, reducing the risk of misuse or insider threats.
Integrating secrets management into the software development lifecycle supports secure DevOps practices, enabling developers to retrieve credentials dynamically and safely without exposing them in code repositories. This reduces the risk of breaches due to leaked credentials or misconfigured environments and ensures compliance with regulatory frameworks that mandate secure handling of sensitive information. Properly configured secrets management enforces the principle of least privilege, limiting access to only those components or users that require it, while automated rotation and expiration of secrets further minimize potential attack vectors.
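A minimal sketch of the pattern (hypothetical variable and function names): the secret never appears in source code; the application asks its environment at runtime, which a vault agent, orchestrator, or CI pipeline populates at deploy time:

    import os

    # Anti-pattern: a credential hard-coded in source ends up in the repository.
    # DB_PASSWORD = "Sup3rS3cret!"      # never do this

    def get_db_password():
        """Retrieve the credential injected by the secrets manager at runtime.
        With a dedicated vault, this would be a client API call instead."""
        password = os.environ.get("APP_DB_PASSWORD")
        if password is None:
            raise RuntimeError("Secret not provisioned; check the secrets manager")
        return password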
Question 190
A company wants to restrict which devices can connect to the corporate network. Which technology enforces this type of access control?
( A ) Network Access Control (NAC)
( B ) Firewall
( C ) IDS
( D ) VPN
Answer: A
Explanation:
Network Access Control (NAC) is a crucial security solution designed to manage and enforce access policies for devices attempting to connect to an organization’s network. Unlike firewalls, which primarily control the flow of network traffic, or intrusion detection systems (IDS), which monitor and alert on suspicious activity without restricting access, NAC focuses on the posture and compliance of devices before they are granted network entry. Virtual Private Networks (VPNs) provide secure remote connectivity, but they do not evaluate whether a device meets organizational security requirements. NAC bridges this gap by ensuring that only devices meeting predefined security standards can access network resources, thereby reducing the risk of malware propagation, data breaches, and unauthorized access.
NAC solutions typically evaluate devices based on multiple criteria, including operating system patch levels, antivirus and endpoint protection status, device type, configuration settings, and compliance with corporate security policies. If a device fails to meet these standards, NAC can quarantine the device, restrict it to a remediation network, or block access entirely until compliance is achieved. This capability allows organizations to maintain a controlled and secure network environment while supporting a diverse array of endpoints, including desktops, laptops, mobile devices, and Internet of Things (IoT) systems.
Integration with directory services, such as Active Directory, and centralized security management platforms allows NAC solutions to enforce consistent policies across both wired and wireless networks. Real-time monitoring provides visibility into all connected devices, enabling administrators to quickly identify non-compliant endpoints and take corrective action. Additionally, NAC supports automated remediation workflows, allowing devices to update antivirus definitions, install missing patches, or adjust configurations before being granted full network access.
By implementing NAC, organizations enhance their security posture by enforcing the principle of least privilege, ensuring that only authorized and compliant devices access sensitive resources. This not only reduces the attack surface but also helps meet regulatory requirements, such as HIPAA, PCI DSS, or GDPR, which mandate strict control over network access and device security. NAC ultimately strengthens endpoint security, protects against lateral movement of threats within the network, and supports a proactive approach to maintaining a secure and resilient IT environment.
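A posture decision can be sketched as follows (hypothetical attribute names and policy; real NAC products evaluate far richer criteria):

    def nac_decision(device):
        """Compliant devices join the corporate VLAN, partially compliant ones
        are quarantined for remediation, and the rest are blocked."""
        compliant = (device["av_enabled"] and device["disk_encrypted"]
                     and device["missing_patches"] == 0)
        if compliant:
            return "ALLOW: corporate VLAN"
        if device["av_enabled"]:
            return "QUARANTINE: remediation VLAN until patches are applied"
        return "BLOCK"

    print(nac_decision({"av_enabled": True, "disk_encrypted": True,
                        "missing_patches": 3}))
    # QUARANTINE: remediation VLAN until patches are applied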
Question 191
An organization wants to implement a security solution that automatically detects and responds to suspicious activity on endpoints. Which technology best fits this requirement?
( A ) Antivirus software
( B ) Endpoint Detection and Response (EDR)
( C ) Data Loss Prevention (DLP)
( D ) Intrusion Detection System (IDS)
Answer: B
Explanation:
Endpoint Detection and Response (EDR) is a modern cybersecurity solution designed to provide continuous monitoring, threat detection, and automated response for endpoints, including laptops, desktops, servers, and increasingly mobile or IoT devices. Unlike traditional antivirus software, which primarily relies on signature-based detection to identify known malware, EDR leverages advanced technologies such as behavioral analytics, machine learning, and threat intelligence to identify anomalous activities that may indicate compromise. This allows EDR systems to detect a wide range of threats, including zero-day attacks, fileless malware, and sophisticated lateral movement across networks, which traditional security tools often miss. Unlike Data Loss Prevention (DLP) systems, which focus on preventing sensitive data from leaving the organization, or Intrusion Detection Systems (IDS), which monitor network traffic for malicious activity but do not provide endpoint-level mitigation, EDR operates directly on endpoints and enables active defense measures.
EDR platforms continuously collect detailed telemetry data from endpoints, including process execution, file changes, registry modifications, system calls, and network activity. This rich dataset allows security teams to perform in-depth forensic investigations, reconstruct attack chains, and understand how threats propagate within the environment. When suspicious behavior is detected, EDR solutions can automatically or semi-automatically respond by isolating the affected endpoint, terminating malicious processes, or quarantining files to prevent further damage. These capabilities significantly reduce response times, allowing organizations to contain threats before they escalate into widespread incidents.
In addition to real-time detection and response, EDR solutions often integrate with Security Information and Event Management (SIEM) systems to provide centralized monitoring, correlation of events, and reporting for compliance purposes. This integration helps organizations maintain a holistic security posture, combining endpoint visibility with network-level monitoring to detect complex attack scenarios. By implementing EDR, organizations enhance their ability to defend against advanced persistent threats, ransomware, insider threats, and other evolving cyber risks. The proactive monitoring, investigative capabilities, and automated response features of EDR not only strengthen security defenses but also reduce the operational impact of incidents, enabling security teams to act quickly and effectively to protect critical assets and maintain business continuity.
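As a simplified illustration of behavioral detection (hypothetical telemetry schema and an illustrative threshold), a rule flagging ransomware-like bursts of file modifications might look like:

    BURST_THRESHOLD = 100   # file modifications per minute, illustrative value

    def evaluate_process(proc_events):
        """Return response actions if a process rewrites files at a rate
        resembling ransomware encryption activity."""
        mods_last_minute = sum(1 for e in proc_events
                               if e["type"] == "file_modify" and e["age_sec"] <= 60)
        if mods_last_minute >= BURST_THRESHOLD:
            return ["terminate_process", "isolate_endpoint", "raise_alert"]
        return []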
Question 192
A company wants to secure sensitive cloud-hosted data while maintaining strong user authentication. Which approach provides both encryption and user verification?
( A ) Virtual Private Network (VPN)
( B ) Cloud Access Security Broker (CASB)
( C ) Multi-Factor Authentication (MFA)
( D ) Secure Sockets Layer (SSL)
Answer: B
Explanation:
A Cloud Access Security Broker (CASB) is a security solution that sits between users and cloud service providers to provide visibility, control, and protection over cloud applications and data. It acts as a gatekeeper, enforcing organizational security policies while users access cloud resources, whether from on-premises networks or remote locations. One of the key functionalities of a CASB is to secure data both in transit and at rest. It can apply encryption and tokenization to sensitive information, ensuring that even if cloud storage is compromised, unauthorized parties cannot access or misuse the data. CASBs also provide strong authentication controls, often integrating with identity providers to enforce Multi-Factor Authentication (MFA), role-based access, device checks, and geolocation restrictions. This ensures that only authorized users can access specific cloud resources under approved conditions.
Unlike VPNs, which encrypt traffic between the user and the network but do not offer fine-grained visibility or policy enforcement for cloud applications, CASBs provide centralized control over cloud usage. They help organizations monitor and manage both sanctioned applications, like approved SaaS tools, and unsanctioned or “shadow IT” services, which pose a risk to sensitive information. SSL/TLS can secure data in transit, but it does not provide policy enforcement or logging for compliance purposes. MFA enhances authentication security but does not address data leakage, cloud usage monitoring, or regulatory adherence. CASBs fill this gap by combining access controls, threat protection, and compliance enforcement in a single solution.
CASBs also provide advanced analytics and logging capabilities, allowing organizations to track user activity, detect suspicious behavior, and respond to potential security incidents. Adaptive policies can automatically adjust access or restrict risky actions based on contextual factors, such as unusual login locations or atypical device usage. By implementing a CASB, organizations can maintain the confidentiality, integrity, and availability of sensitive data hosted in the cloud, prevent unauthorized access and insider threats, and demonstrate compliance with industry regulations like GDPR, HIPAA, and PCI DSS. This comprehensive approach to cloud security ensures that organizations can safely leverage cloud services without compromising security or compliance.
Question 193
An organization wants to prevent ransomware from encrypting critical files on employee devices. Which control is most effective for this purpose?
( A ) Endpoint backups
( B ) Application whitelisting
( C ) Firewalls
( D ) SIEM
Answer: B
Explanation:
Application whitelisting is a highly effective security strategy designed to prevent ransomware and other forms of malicious software from executing on endpoints. It works by allowing only pre-approved, verified applications to run, effectively blocking any unknown or untrusted programs from executing. This preventive approach is critical in stopping ransomware, which relies on running unauthorized code to encrypt files and disrupt business operations. By enforcing strict execution policies, application whitelisting prevents malicious programs from gaining a foothold, reducing the likelihood of successful attacks and the resulting operational damage.
Unlike endpoint backups, which are essential for data recovery after an attack, application whitelisting stops the ransomware from running in the first place, thereby minimizing or even eliminating the need for restoration. Firewalls, while important for controlling network traffic, cannot prevent malicious files from executing locally on a device. Similarly, Security Information and Event Management (SIEM) systems provide valuable monitoring, alerting, and forensic capabilities but do not actively prevent ransomware execution. Application whitelisting fills this gap by providing proactive protection, ensuring that only trusted software is permitted to operate on corporate devices.
Deployment of application whitelisting can be managed centrally, which allows organizations to enforce consistent policies across all endpoints, streamline updates to the whitelist, and reduce administrative overhead. This centralized control is particularly useful in large enterprise environments where maintaining consistent security policies can be challenging. To maximize effectiveness, application whitelisting should be integrated into a layered security strategy. Combining it with endpoint protection solutions, timely operating system and software patching, and comprehensive user awareness training ensures multiple lines of defense against ransomware.
This layered approach not only reduces the likelihood of infection but also minimizes potential operational disruptions. It helps maintain business continuity by ensuring that critical systems remain functional and protected against unauthorized code execution. Organizations that implement application whitelisting as part of a broader security framework can significantly strengthen their resilience against ransomware threats, safeguard sensitive data, and maintain the integrity and availability of their IT infrastructure.
Question 194
A company wants to ensure that all removable media connected to corporate laptops are encrypted automatically. Which solution provides this capability?
( A ) Full Disk Encryption (FDE)
( B ) Endpoint DLP
( C ) File-level encryption
( D ) BitLocker or similar drive encryption
Answer: D
Explanation:
Drive encryption tools, such as BitLocker, play a critical role in protecting sensitive data stored on removable media, including USB drives and external hard drives. These tools provide automatic encryption, ensuring that the information on these devices remains inaccessible if the media is lost, stolen, or misplaced. By encrypting the entire storage device, drive encryption prevents unauthorized access even if the physical device falls into the wrong hands, thereby significantly reducing the risk of data breaches and information theft.
While full disk encryption (FDE) protects internal drives on laptops and desktops, it may not automatically extend protection to removable media, leaving sensitive data on external drives vulnerable if not manually encrypted. File-level encryption, on the other hand, requires users to selectively encrypt individual files, which can be inconsistent and prone to human error. Users may forget to encrypt certain files, or apply weaker encryption methods, increasing the likelihood of exposure. Endpoint Data Loss Prevention (DLP) solutions can monitor and restrict transfers of sensitive data to removable media but typically do not provide automatic encryption, which means the security of the data relies heavily on user compliance.
BitLocker and similar drive encryption solutions address these gaps by integrating seamlessly with operating systems to offer transparent encryption for removable devices. Users can access their encrypted drives without complex steps, while organizations maintain control through centralized management of encryption policies. This ensures consistent application of encryption standards across all devices, reducing administrative overhead and minimizing human error.
Automated encryption of removable media not only protects sensitive data but also helps organizations comply with regulatory standards such as GDPR, HIPAA, and PCI DSS, which require the safeguarding of personal and financial information. By deploying these tools, organizations can enforce strong security controls over portable devices, mitigate risks associated with lost or stolen media, and maintain the confidentiality and integrity of critical data. Overall, drive encryption solutions like BitLocker provide both practical usability for end users and robust protection for organizational data, forming a vital component of a comprehensive data security strategy.
Question 195
A network administrator needs to prevent attackers from discovering internal IP addresses and network structure during a reconnaissance attempt. Which technique is most effective?
( A ) Network segmentation
( B ) NAT (Network Address Translation)
( C ) IDS
( D ) VPN
Answer: B
Explanation:
Network Address Translation (NAT) is a crucial network security mechanism that helps protect internal networks by hiding private IP addresses when devices communicate over external networks. By translating internal, private IP addresses into public addresses, NAT prevents external attackers from directly identifying or mapping internal devices, effectively obfuscating the internal network structure. This concealment makes it significantly harder for malicious actors to conduct reconnaissance activities such as port scanning, network enumeration, or mapping internal network topologies, which are common preliminary steps in targeted attacks.
While network segmentation enhances security by limiting lateral movement within a network, it does not inherently hide IP addresses from external entities. Similarly, intrusion detection systems (IDS) can detect suspicious activity or attacks but do not prevent attackers from discovering internal network addresses. Virtual private networks (VPNs) provide encrypted remote access and secure communication channels but cannot mask internal IP addresses if an external system attempts to probe exposed network resources. NAT fills this gap by serving as a boundary between internal devices and the wider internet, providing both a level of obfuscation and a measure of protection against direct attacks.
Beyond security benefits, NAT also contributes to operational efficiency by allowing organizations to use private IP address ranges internally while conserving public IP addresses. This capability is particularly valuable for large networks where public IP availability may be limited. Organizations often combine NAT with additional security measures such as firewalls and intrusion prevention systems to create a layered defense strategy. Firewalls can enforce access control policies at the NAT boundary, while intrusion prevention systems can detect and block suspicious traffic attempting to exploit exposed services.
By implementing NAT, organizations reduce the visibility of internal network resources to potential attackers, limit exposure to external threats, and mitigate risks associated with unauthorized access. When integrated with complementary security controls, NAT not only protects internal hosts but also strengthens overall network resilience, ensuring that both operational needs and security objectives are effectively balanced.
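The core translation idea can be sketched as a port address translation (PAT) table; the addresses below come from the reserved documentation and private ranges and are purely illustrative:

    import itertools

    # Many private hosts share one public IP, so external peers never see
    # internal addresses or the internal topology.
    PUBLIC_IP = "203.0.113.10"
    _next_port = itertools.count(40000)
    nat_table = {}   # (private_ip, private_port) -> public source port

    def translate_outbound(private_ip, private_port):
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next(_next_port)
        return PUBLIC_IP, nat_table[key]

    print(translate_outbound("10.0.0.15", 51515))  # ('203.0.113.10', 40000)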
Question 196
A company wants to implement a security policy that requires employees to change passwords every 90 days and prohibits reuse of the last five passwords. Which security principle is being applied?
( A ) Least Privilege
( B ) Password Policy Enforcement
( C ) Multi-Factor Authentication
( D ) Role-Based Access Control
Answer: B
Explanation:
Password policy enforcement ensures that users adhere to security standards, such as regular password changes, complexity requirements, and history restrictions. Changing passwords periodically and preventing reuse minimizes the risk of compromised credentials being used for unauthorized access. Least privilege restricts access to only necessary resources and does not directly enforce password rules. Multi-Factor Authentication adds layers to authentication but does not address password reuse. RBAC assigns permissions based on roles rather than focusing on credential policies. By implementing password policies, organizations improve overall account security, reduce the likelihood of brute-force or credential-stuffing attacks, and comply with regulatory standards. Regular audits and education programs reinforce compliance and encourage safe password practices, complementing other security measures like MFA and account monitoring.
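A minimal sketch of the two checks described above (illustrative parameters; production systems should use a memory-hard KDF such as Argon2 and a distinct salt per history entry):

    import hashlib
    from datetime import datetime, timedelta

    MAX_AGE = timedelta(days=90)
    HISTORY_DEPTH = 5

    def must_rotate(last_changed):
        """True once the current password is older than 90 days."""
        return datetime.utcnow() - last_changed > MAX_AGE

    def reuse_blocked(candidate, salt, history_hashes):
        """Reject a new password matching any of the last five stored hashes.
        (A single shared salt is kept here only for brevity.)"""
        digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
        return digest in history_hashes[-HISTORY_DEPTH:]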
Question 197
An organization wants to detect and prevent unauthorized changes to critical system files on servers. Which security control should be implemented?
( A ) File Integrity Monitoring (FIM)
( B ) Antivirus software
( C ) SIEM
( D ) Network segmentation
Answer: A
Explanation:
File Integrity Monitoring (FIM) monitors critical system files for changes, unauthorized modifications, or tampering. Alerts are triggered when deviations from approved configurations are detected. Antivirus software primarily detects malware, SIEM provides event aggregation and correlation, and network segmentation separates traffic but does not monitor file integrity. FIM ensures the integrity of operating system files, application binaries, and configuration files, which is crucial for preventing insider attacks, malware tampering, and unauthorized system modifications. By integrating FIM with SIEM, organizations can correlate alerts with other security events, streamline incident response, and maintain compliance with standards such as PCI DSS, HIPAA, and ISO 27001. FIM provides continuous assurance that critical systems remain secure, reduces attack surface risk, and enhances forensic capabilities for post-incident investigations.
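The core mechanism can be sketched as baseline hashing plus periodic comparison (illustrative file paths):

    import hashlib

    MONITORED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # illustrative paths

    def snapshot(paths):
        """Record a baseline of SHA-256 digests for the monitored files."""
        baseline = {}
        for path in paths:
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
        return baseline

    def changed_files(baseline, current):
        """Return files whose digest deviates from the approved baseline."""
        return [p for p, d in current.items() if baseline.get(p) != d]

    # baseline = snapshot(MONITORED)                # taken at a known-good state
    # alerts = changed_files(baseline, snapshot(MONITORED))  # run on a schedule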
Question 198
A company is designing a disaster recovery plan for its data center. Which type of backup ensures minimal data loss in the event of a system failure?
( A ) Full backup
( B ) Differential backup
( C ) Incremental backup
( D ) Real-time replication
Answer: D
Explanation:
Real-time replication continuously copies data to a secondary site, ensuring minimal data loss during system failures. Full backups capture the entire dataset periodically but may result in data loss between backups. Differential backups copy changes since the last full backup, and incremental backups copy changes since the last backup, both introducing potential data gaps. Real-time replication ensures high availability, near-zero recovery point objectives (RPO), and rapid recovery, making it ideal for critical systems requiring continuous operation. Implementing replication also supports disaster recovery strategies by maintaining synchronized, geographically separate copies of data, enabling organizations to restore operations immediately after a primary site failure while minimizing downtime and business impact.
Question 199
A security administrator wants to prevent attackers from exploiting outdated software vulnerabilities. Which process addresses this requirement effectively?
( A ) Vulnerability scanning
( B ) Patch management
( C ) Penetration testing
( D ) Security audit
Answer: B
Explanation:
Patch management involves regularly applying updates to operating systems, applications, and firmware to address known vulnerabilities. Keeping software up to date prevents attackers from exploiting unpatched weaknesses. Vulnerability scanning identifies potential issues but does not remediate them. Penetration testing simulates attacks but is periodic rather than continuous. Security audits assess compliance but may not ensure timely patching. A robust patch management process includes automated detection of missing patches, testing, deployment, and verification to ensure that systems remain secure. Proper patch management reduces attack surfaces, mitigates zero-day exploit risks, ensures regulatory compliance, and supports overall security posture by proactively addressing software vulnerabilities.
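The automated detection step can be sketched as comparing an inventory against minimum fixed versions (hypothetical package data and version tuples):

    # Minimum versions that remediate known vulnerabilities, illustrative only.
    FIXED_VERSIONS = {"openssl": (3, 0, 14), "nginx": (1, 24, 0)}

    def missing_patches(inventory):
        """Return packages running below the minimum fixed version."""
        return [pkg for pkg, ver in inventory.items()
                if pkg in FIXED_VERSIONS and ver < FIXED_VERSIONS[pkg]]

    print(missing_patches({"openssl": (3, 0, 9), "nginx": (1, 24, 0)}))
    # ['openssl']  -> schedule for testing, deployment, and verification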
Question 200
An organization wants to ensure that sensitive emails cannot be sent to unauthorized recipients. Which security control provides this capability?
( A ) Data Loss Prevention (DLP)
( B ) Antivirus software
( C ) Network segmentation
( D ) VPN
Answer: A
Explanation:
Data Loss Prevention (DLP) policies inspect outbound communications such as emails and prevent sensitive data from leaving the organization. DLP can block, quarantine, or encrypt emails containing confidential information based on predefined rules. Antivirus software detects malware but does not enforce data-specific restrictions. Network segmentation limits traffic flow but does not monitor content. VPN encrypts communication but does not prevent sending sensitive data to unauthorized recipients. By implementing DLP for email, organizations ensure regulatory compliance, protect intellectual property, prevent accidental data leaks, and maintain control over confidential communications. DLP solutions often include content analysis, contextual awareness, and reporting, enabling both preventive and detective controls that safeguard critical information.