CompTIA SecurityX CAS-005 Exam Dumps and Practice Test Questions Set 5 Q 81-100

Visit here for our full CompTIA SecurityX CAS-005 exam dumps and practice test questions.

Question 81

Which method allows organizations to restrict access to sensitive systems based on defined roles and responsibilities?

( A ) Mandatory Access Control
( B ) Role-Based Access Control
( C ) Discretionary Access Control
( D ) Attribute-Based Access Control

Answer: B

Explanation:

Role-Based Access Control (RBAC) is a structured approach to managing access to systems, applications, and data by assigning permissions based on defined roles rather than individual users. In RBAC, each role corresponds to a specific set of responsibilities within an organization, and users are granted access according to the roles they occupy. This ensures that employees have the minimum privileges necessary to perform their duties, reducing the potential for unauthorized access or accidental misuse of resources. Unlike Mandatory Access Control (MAC), which enforces centralized policies without user discretion, Discretionary Access Control (DAC), which allows resource owners to determine access, or Attribute-Based Access Control (ABAC), which dynamically evaluates user attributes and environmental conditions, RBAC offers a balance of structure, simplicity, and scalability, particularly suited for enterprise environments with large numbers of users and complex workflows.
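
To make the role-to-permission mapping concrete, the following minimal Python sketch (all role names, permissions, and user assignments are hypothetical) shows how access checks flow through roles rather than being granted to individual users:

```python
# Minimal RBAC sketch: permissions are attached to roles, never to users directly.
# Role names, permissions, and user assignments below are hypothetical examples.

ROLE_PERMISSIONS = {
    "hr_analyst": {"read_employee_records"},
    "payroll_admin": {"read_employee_records", "process_payroll"},
    "it_auditor": {"read_audit_logs"},
}

USER_ROLES = {
    "alice": {"hr_analyst"},
    "bob": {"payroll_admin", "it_auditor"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Return True if any role assigned to the user grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_authorized("alice", "process_payroll"))  # False: not granted by her role
print(is_authorized("bob", "process_payroll"))    # True: payroll_admin grants it
```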

One of the key advantages of RBAC is its ability to minimize the risk of privilege abuse and insider threats. By aligning permissions with roles, organizations can enforce the principle of least privilege, ensuring users only have access to the information and systems necessary for their job functions. This approach not only strengthens security but also supports regulatory compliance by providing clear documentation of who has access to what resources and why. RBAC also facilitates separation of duties, an important control in preventing fraud and operational errors, by ensuring that critical tasks require multiple roles or approval workflows.

Effective implementation of RBAC requires careful planning, including defining roles, mapping permissions accurately, and regularly reviewing role assignments as employees change positions or responsibilities. Permission audits and ongoing monitoring are essential to ensure that access rights remain appropriate over time. Integrating RBAC with other security practices, such as multi-factor authentication and logging, further enhances overall security posture. When properly deployed, RBAC streamlines access management, reduces administrative overhead, and provides a robust framework for maintaining operational security and compliance, making it a cornerstone of modern enterprise cybersecurity strategy.

Question 82

Which attack vector involves intercepting and modifying communications between two parties without their knowledge?

( A ) Man-in-the-Middle
( B ) Phishing
( C ) Replay Attack
( D ) Brute Force

Answer: A

Explanation: 

Man-in-the-Middle (MitM) attacks are a form of cyberattack where an attacker secretly intercepts or alters communications between two parties without their knowledge. The goal of these attacks is typically to eavesdrop on sensitive information, manipulate data in transit, or impersonate one of the communicating entities. Unlike phishing, which relies on tricking users into divulging credentials or personal information, replay attacks, which involve retransmitting captured data to gain unauthorized access, or brute force attacks, which attempt to guess passwords systematically, MitM attacks operate in real time by inserting the attacker into the communication channel itself. This makes them particularly dangerous because the victim often remains unaware that the communication has been compromised.

MitM attacks can occur in a variety of contexts, with unsecured public Wi-Fi networks being one of the most common. Attackers may set up rogue access points or use packet-sniffing tools to capture unencrypted data. Other common MitM techniques include DNS spoofing, where users are redirected to malicious websites; ARP poisoning, which manipulates local network routing to intercept traffic; and SSL/TLS stripping, which downgrades encrypted connections to plaintext. These attacks can result in the theft of sensitive information such as login credentials, financial data, or intellectual property, as well as session hijacking and the injection of malicious commands.
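
As a small illustration of defending against the SSL/TLS downgrade and interception techniques mentioned above, the sketch below uses Python's standard ssl module to fail closed when a server's certificate chain or hostname cannot be verified; the hostname is a placeholder:

```python
# Sketch: open a TLS connection that fails closed if the certificate chain or
# hostname does not verify, which blocks a basic interception/downgrade attempt.
# "example.com" is a placeholder host, not taken from the question text.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()            # verifies chain and hostname by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol downgrades

with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Negotiated:", tls.version())
        print("Peer certificate subject:", tls.getpeercert()["subject"])
```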

Question 83

Which type of firewall inspects the full context of network traffic and makes decisions based on application-level information?

( A ) Packet-Filtering Firewall
( B ) Stateful Firewall
( C ) Next-Generation Firewall
( D ) Circuit-Level Gateway

Answer: C

Explanation: 

Next-Generation Firewalls (NGFWs) represent a significant advancement over traditional firewall technologies by providing more comprehensive and intelligent security measures at the network perimeter. While conventional firewalls such as packet-filtering firewalls focus solely on examining packet headers, stateful firewalls track the state of active connections, and circuit-level gateways manage TCP or UDP sessions, NGFWs operate at the application layer and combine multiple security functions in a single platform. This capability allows them to analyze network traffic in depth, identify threats that may bypass standard firewalls, and enforce security policies based on the specific applications or users involved rather than just IP addresses or ports.

NGFWs include advanced features such as intrusion prevention systems (IPS), deep packet inspection, application awareness, and integration with threat intelligence feeds. These capabilities enable organizations to detect and block sophisticated attacks, including malware, zero-day exploits, and advanced persistent threats, that often evade simpler firewall models. NGFWs also provide visibility into encrypted traffic, which has become increasingly important as more applications and communications use SSL/TLS encryption. By understanding the context of network traffic, administrators can implement granular policies that control access to specific applications, users, or services while maintaining business functionality.
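
The following toy Python sketch is not a real NGFW configuration, but it illustrates the kind of application- and user-aware rule matching described above; all rule fields and traffic records are hypothetical:

```python
# Toy illustration of application- and user-aware policy matching, in the spirit
# of the NGFW behavior described above. Real NGFWs classify applications via deep
# packet inspection; the rules and flows here are made-up examples.

RULES = [
    {"app": "ssh",        "users": {"netops"}, "action": "allow"},
    {"app": "bittorrent", "users": None,       "action": "deny"},   # deny for everyone
    {"app": "https",      "users": None,       "action": "allow"},
]

def evaluate(flow: dict) -> str:
    """Return the action of the first rule matching the flow's application and user."""
    for rule in RULES:
        if rule["app"] == flow["app"] and (rule["users"] is None or flow["user"] in rule["users"]):
            return rule["action"]
    return "deny"  # default-deny when no rule matches

print(evaluate({"app": "ssh", "user": "netops"}))   # allow
print(evaluate({"app": "ssh", "user": "intern"}))   # deny (falls through to default)
```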

Effective deployment of NGFWs requires careful planning, configuration, and ongoing management. Security teams must accurately define access and inspection policies, regularly update threat signatures, and continuously monitor logs to identify anomalies, performance issues, or potential security breaches. Integration with other security tools, such as endpoint protection, security information and event management (SIEM) systems, and threat intelligence platforms, further enhances the NGFW’s ability to detect and respond to emerging threats.

Question 84

Which policy defines acceptable use of organizational resources and outlines employee responsibilities?

( A ) Incident Response Policy
( B ) Acceptable Use Policy
( C ) Data Retention Policy
( D ) Business Continuity Policy

Answer: B

Explanation:

An Acceptable Use Policy (AUP) is a formal document that establishes clear rules and guidelines regarding the use of an organization’s IT resources, including networks, systems, devices, and applications. Unlike Incident Response Policies, which are designed to guide the organization’s actions during and after security incidents, Data Retention Policies, which dictate how long data should be stored and how it should be handled, and Business Continuity Policies, which focus on maintaining operational functionality during disruptions, an AUP specifically defines acceptable and prohibited behaviors for users within the organizational IT environment. Its primary purpose is to protect organizational assets, minimize security risks, and promote responsible and ethical use of technology.

AUPs typically cover a wide range of topics, including acceptable internet and email usage, guidelines for installing and using software, proper password management, rules regarding social media activity, and restrictions on accessing or sharing sensitive information. By explicitly communicating these expectations, organizations help employees understand their responsibilities and the potential consequences of non-compliance. Training and awareness programs play a critical role in reinforcing the principles of the AUP, ensuring that staff can recognize risky behavior, avoid unintentional security breaches, and respond appropriately when encountering suspicious activity.

Enforcing an AUP involves continuous monitoring of IT usage to ensure compliance, as well as clearly defined disciplinary measures for violations. This enforcement not only deters misuse but also helps prevent insider threats, limits the potential for malware infections, and reduces the likelihood of phishing attacks or other social engineering exploits. AUPs also support regulatory compliance by ensuring that employees follow best practices for data protection and information security.

Question 85

Which technique can detect unauthorized changes to files or systems by comparing current states to known baselines?

( A ) Penetration Testing
( B ) File Integrity Monitoring
( C ) Vulnerability Scanning
( D ) Network Sniffing

Answer: B

Explanation:

File Integrity Monitoring (FIM) is a security practice designed to detect and alert organizations to unauthorized or unexpected changes in files, directories, or critical system configurations. It works by establishing a known, trusted baseline of file states and continuously comparing the current state of these files against that baseline. Any deviations from the baseline, such as modifications, additions, or deletions, are flagged for review. Unlike Penetration Testing, which actively probes a system for vulnerabilities, Vulnerability Scanning, which identifies security weaknesses in software or configurations, or Network Sniffing, which monitors network traffic for suspicious activity, FIM operates at the system and application level to ensure the integrity of files and configurations.

FIM is particularly valuable for detecting unauthorized changes that could indicate malicious activity, such as malware infections, ransomware deployment, or insider threats. It also helps organizations identify configuration drift, which occurs when systems gradually diverge from their approved configurations, potentially creating security gaps. Implementation of FIM often relies on cryptographic techniques such as checksums or hash functions, which create unique identifiers for files. Any alteration to a file results in a change to its hash, providing a clear signal that the file may have been tampered with. Additionally, FIM tools typically maintain detailed logs of changes, which can be crucial for forensic investigations and compliance reporting.
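
A minimal sketch of that hash-based comparison, assuming SHA-256 and a couple of example file paths, might look like this in Python:

```python
# Sketch of the hash-based comparison FIM tools perform: build a baseline of
# SHA-256 digests, then flag files whose current digest no longer matches.
# The monitored paths are hypothetical examples.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    return {str(p): sha256_of(Path(p)) for p in paths}

def detect_changes(baseline):
    """Return paths whose current hash differs from the baseline or that are missing."""
    changed = []
    for path, expected in baseline.items():
        p = Path(path)
        if not p.exists() or sha256_of(p) != expected:
            changed.append(path)
    return changed

baseline = build_baseline(["/etc/hosts", "/etc/passwd"])  # example paths
print(detect_changes(baseline))  # [] until one of the files is modified or removed
```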

Integrating FIM with centralized security information and event management (SIEM) platforms enhances its effectiveness by enabling real-time monitoring, automated alerting, and correlation with other security events. Regular maintenance, such as updating baselines to reflect legitimate changes and defining thresholds to reduce false positives, is essential for accurate monitoring. By implementing robust FIM practices, organizations can strengthen operational integrity, support regulatory compliance, and enhance incident response capabilities. Prompt detection of unauthorized changes minimizes the risk of data loss, system instability, and potential security breaches, ensuring that critical systems remain reliable and secure over time.

Question 86

Which attack exploits vulnerabilities in software by injecting malicious commands into an input field?

( A ) Cross-Site Scripting
( B ) SQL Injection
( C ) Buffer Overflow
( D ) Directory Traversal

Answer: B

Explanation:

SQL Injection is a type of attack in which an attacker exploits vulnerabilities in a web application’s input fields to execute malicious SQL statements directly on the backend database. By carefully crafting input, attackers can manipulate queries to read, modify, or delete sensitive information, potentially bypassing authentication mechanisms and gaining unauthorized access to the system. Unlike Cross-Site Scripting, which targets client-side scripts and user browsers, Buffer Overflow attacks, which exploit memory allocation issues in applications, or Directory Traversal attacks, which allow access to files outside intended directories, SQL Injection specifically targets the database layer, making it particularly dangerous because it can compromise the integrity, confidentiality, and availability of critical data.
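
The mechanism can be illustrated with a small, self-contained Python sketch against an in-memory SQLite database; the table, credentials, and crafted input are purely illustrative, and the parameterized query at the end shows why treating input as data rather than SQL syntax defeats the attack:

```python
# Sketch of the query-manipulation mechanism described above, using an in-memory
# SQLite database. The table, column names, and crafted input are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'correct-horse')")

malicious_input = "anything' OR '1'='1"

# Vulnerable pattern: attacker-controlled text is concatenated into the SQL string,
# so the OR '1'='1' clause makes the WHERE condition true for every row.
vulnerable = f"SELECT * FROM users WHERE username = 'admin' AND password = '{malicious_input}'"
print(conn.execute(vulnerable).fetchall())   # returns the admin row despite a wrong password

# Safer pattern: a parameterized query treats the input as data, not SQL syntax.
safe = "SELECT * FROM users WHERE username = ? AND password = ?"
print(conn.execute(safe, ("admin", malicious_input)).fetchall())  # []
```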

Attackers commonly exploit SQL Injection to extract sensitive data such as usernames, passwords, financial records, or personal information. In more advanced scenarios, they may escalate privileges, drop or alter database tables, or even execute commands on the underlying server. The consequences of a successful SQL Injection attack can be severe, including financial loss, regulatory non-compliance, reputational damage, and operational disruption. This makes it one of the most critical and prevalent threats in web application security.

Question 87

Which method authenticates users by analyzing physical characteristics such as fingerprints or retina patterns?

( A ) Token-Based Authentication
( B ) Biometric Authentication
( C ) Knowledge-Based Authentication
( D ) Multi-Factor Authentication

Answer: B

Explanation:

Biometric authentication is a security mechanism that verifies an individual’s identity by analyzing unique physiological or behavioral traits. Common biometric identifiers include fingerprints, facial features, retina or iris patterns, voiceprints, and even gait or typing patterns. Unlike token-based authentication, which depends on something the user physically possesses, or knowledge-based authentication, which relies on information such as passwords or PINs, biometrics leverages inherent characteristics that are generally unique to each individual. While multi-factor authentication combines two or more of these approaches, biometric authentication provides a distinct layer of security because it is tied directly to the person and is inherently difficult to duplicate or share.

The use of biometrics enhances both security and convenience, especially in environments where high assurance of identity is required, such as secure facilities, financial services, mobile devices, and government systems. Implementing biometric authentication requires careful attention to accuracy and reliability. Metrics like false acceptance rate (FAR), which measures the likelihood of unauthorized users being accepted, and false rejection rate (FRR), which measures the probability of rejecting authorized users, are critical considerations. Balancing these metrics ensures both usability and security.
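
Both rates are simple ratios over authentication test outcomes; the sketch below uses made-up example counts to show how they are typically computed:

```python
# Sketch of how the two error rates mentioned above are typically computed from
# authentication test results. The counts below are made-up example numbers.

impostor_attempts = 10_000      # attempts by people who should be rejected
false_accepts = 12              # impostors the system wrongly accepted

genuine_attempts = 5_000        # attempts by legitimate, enrolled users
false_rejects = 150             # legitimate users the system wrongly rejected

far = false_accepts / impostor_attempts   # False Acceptance Rate
frr = false_rejects / genuine_attempts    # False Rejection Rate

print(f"FAR: {far:.2%}")  # 0.12% - the security-relevant error
print(f"FRR: {frr:.2%}")  # 3.00% - the usability-relevant error
```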

Data protection is a fundamental aspect of biometric systems. Biometric templates must be encrypted and stored securely to prevent unauthorized access or misuse. Additionally, privacy concerns must be addressed to comply with regulatory frameworks and to maintain user trust. Biometric systems are not immune to threats such as spoofing attacks, sensor tampering, or template compromise. Attackers may attempt to replicate fingerprints, use high-resolution photographs, or exploit system vulnerabilities to bypass authentication.

Question 88

Which backup strategy involves copying all selected data at regular intervals and retaining multiple previous versions?

( A ) Full Backup
( B ) Differential Backup
( C ) Incremental Backup
( D ) Continuous Data Protection

Answer: D

Explanation:

Continuous Data Protection (CDP) is an advanced backup strategy designed to capture every change made to selected data in real-time or near real-time, creating a comprehensive record of data modifications over time. Unlike traditional full backups, which copy all data at a single point, differential backups that store changes since the last full backup, or incremental backups that record changes since the last backup of any type, CDP continuously monitors and records every data modification. This approach allows organizations to achieve highly granular recovery points, minimizing the risk of data loss and enabling precise restoration to any moment in time prior to an incident.

CDP works by tracking data changes as they occur, storing each modification in a secure repository. This ensures that every update, addition, or deletion is captured and can be recovered if needed. The key advantage of CDP is the ability to restore data almost instantaneously to a very specific state, reducing downtime and data loss in comparison to traditional backup methods. This makes it particularly valuable in environments where data integrity and availability are critical, such as financial institutions, healthcare systems, and large-scale enterprise IT infrastructures.
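
Production CDP products capture changes continuously at the block or I/O level; the simplified Python sketch below only illustrates the core idea, versioning a hypothetical file every time its content changes so that earlier states remain recoverable:

```python
# Simplified illustration of the idea behind CDP: every time the watched file's
# content changes, a timestamped copy is kept so any prior state can be restored.
# Real CDP products work continuously at the block or I/O level; this polling
# loop and the watched path are only a teaching sketch.
import hashlib
import shutil
import time
from pathlib import Path

watched = Path("important.db")          # hypothetical file to protect
versions = Path("versions")
versions.mkdir(exist_ok=True)

last_digest = None
while True:
    if watched.exists():
        digest = hashlib.sha256(watched.read_bytes()).hexdigest()
        if digest != last_digest:       # content changed since the last capture
            stamp = time.strftime("%Y%m%dT%H%M%S")
            shutil.copy2(watched, versions / f"{watched.name}.{stamp}")
            last_digest = digest
    time.sleep(1)                       # near-real-time polling interval
```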

Successful implementation of CDP requires careful planning, including secure storage of backups, versioning to maintain historical records, and management of network bandwidth to ensure continuous replication does not interfere with normal operations. Organizations must also monitor storage capacity, validate the integrity of backups, and integrate CDP processes with broader disaster recovery and business continuity protocols. Regular testing and validation are essential to ensure that recovery procedures function correctly and that data can be restored rapidly in the event of ransomware attacks, accidental deletions, or system failures.

Question 89

Which security practice involves reviewing system and network logs to identify suspicious activity?

( A ) Penetration Testing
( B ) Log Analysis
( C ) Vulnerability Assessment
( D ) Risk Management

Answer: B

Explanation:

Log analysis is a critical process in cybersecurity and IT operations that involves the systematic review and examination of logs generated by systems, applications, and network devices. These logs record a wide range of activities, including user actions, system events, network traffic, and application behaviors. Unlike penetration testing, which actively probes systems to uncover vulnerabilities, vulnerability assessments that identify weaknesses, or risk management, which evaluates and mitigates potential threats, log analysis focuses on continuous monitoring and detection. Its primary goal is to identify anomalies, suspicious patterns, or potential security incidents as they occur, providing organizations with actionable intelligence for proactive defense.

The process of log analysis typically includes parsing logs to extract relevant information, correlating events across multiple systems to detect patterns, and generating alerts when behaviors deviate from established baselines. For example, repeated failed login attempts, unexpected privilege escalations, or unusual data transfers can indicate potential compromise. Modern Security Information and Event Management (SIEM) platforms enhance log analysis by aggregating log data from disparate sources, normalizing it, and providing real-time visibility. These platforms also enable automated responses to certain events, reducing response time and improving overall security posture.
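
As a minimal illustration of that failed-login correlation, the sketch below counts authentication failures per source address and raises an alert past an assumed threshold; the log format and threshold are assumptions, since real SIEMs normalize many different formats:

```python
# Sketch of the failed-login correlation described above: count authentication
# failures per source address and alert when a threshold is exceeded. The log
# format and threshold are assumptions; real SIEMs normalize many formats.
import re
from collections import Counter

THRESHOLD = 5
pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

sample_log = [
    "Oct 12 10:01:02 host sshd[1]: Failed password for root from 203.0.113.7 port 40112",
    "Oct 12 10:01:04 host sshd[1]: Failed password for root from 203.0.113.7 port 40113",
    "Oct 12 10:01:05 host sshd[1]: Accepted publickey for alice from 198.51.100.9",
] * 3   # repeat to simulate a burst of failures

failures = Counter(
    m.group(1) for line in sample_log if (m := pattern.search(line))
)

for source, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins from {source} - possible brute force")
```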

Effective log analysis requires careful planning and configuration. Organizations must ensure that all critical systems are generating logs, define proper retention policies, and implement access controls to protect log integrity. Regular reviews of logs, correlation of seemingly unrelated events, and proactive threat hunting help uncover subtle indicators of compromise that might otherwise go unnoticed. Additionally, log analysis plays a crucial role in compliance, providing audit trails required by regulations such as PCI DSS, HIPAA, and GDPR.

Beyond detection, log analysis supports forensic investigations, helping security teams reconstruct incidents, determine the scope of a breach, and identify the root cause. Continuous monitoring, intelligent alerting, and integration with broader security operations are essential to ensure that log analysis is not merely a reactive measure but a proactive tool in maintaining operational security and resilience. By implementing a robust log analysis program, organizations strengthen incident response capabilities, reduce dwell time of threats, and enhance overall cybersecurity readiness.

Question 90

Which type of security testing involves authorized attempts to exploit vulnerabilities to evaluate system defenses?

( A ) Vulnerability Scanning
( B ) Penetration Testing
( C ) Security Auditing
( D ) Risk Assessment

Answer: B

Explanation: 

Penetration testing, often referred to as pen testing, is a proactive and systematic security assessment in which authorized security professionals attempt to exploit vulnerabilities within an organization’s systems, applications, or networks. The primary objective of penetration testing is to evaluate the effectiveness of existing security controls by simulating real-world attack scenarios. Unlike vulnerability scanning, which passively identifies weaknesses without actively exploiting them, security auditing, which reviews policies and procedures, or risk assessments, which focus on evaluating potential threats and their impact, penetration testing actively challenges system defenses to uncover exploitable flaws. This hands-on approach allows organizations to understand not just where weaknesses exist, but how they could be leveraged by attackers to compromise confidentiality, integrity, or availability of critical assets.

Penetration testing typically follows a structured methodology. It begins with reconnaissance to gather information about the target environment, followed by vulnerability identification using automated tools and manual techniques. The next stages involve exploitation, where testers attempt to leverage vulnerabilities to gain unauthorized access or escalate privileges, and post-exploitation, where the potential impact of successful attacks is assessed. The final phase involves detailed reporting, which documents findings, risk levels, potential business impact, and recommendations for remediation. By providing actionable insights, penetration testing allows organizations to prioritize mitigation efforts and strengthen defenses before attackers can exploit vulnerabilities.
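
The reconnaissance and identification stages can be illustrated with a very small TCP connect scan; this is a teaching sketch only, the target address is a placeholder, and it should never be run against systems without explicit authorization:

```python
# Tiny TCP connect scan illustrating the reconnaissance/identification stage
# described above. Only run this against systems you are explicitly authorized
# to test; the target address and port list here are placeholders.
import socket

target = "192.0.2.10"                    # documentation/example address
ports = [22, 80, 443, 3389]

for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        result = s.connect_ex((target, port))   # 0 means the port accepted the connection
        state = "open" if result == 0 else "closed/filtered"
        print(f"{target}:{port} {state}")
```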

Regular penetration testing is essential for maintaining a robust security posture. It validates the effectiveness of technical controls, such as firewalls, intrusion detection systems, and application defenses, while also highlighting gaps in policies, processes, and employee awareness. Penetration testing is particularly valuable for organizations seeking to comply with regulatory standards, such as PCI DSS, HIPAA, or ISO 27001, which often require evidence of proactive security evaluations.

Question 91

Which cryptographic method ensures that a message cannot be altered without detection?

( A ) Symmetric Encryption
( B ) Asymmetric Encryption
( C ) Digital Signatures
( D ) Hashing

Answer: C

Explanation:

Digital signatures are a fundamental component of modern cybersecurity, providing mechanisms for ensuring data integrity, authentication, and non-repudiation. They rely on cryptographic techniques to verify that a message or document has not been altered and to confirm the identity of the sender. Unlike symmetric encryption, which relies on a shared secret key to maintain confidentiality, asymmetric encryption, which uses public-private key pairs for secure communication, or hashing, which ensures data integrity without offering identity verification, digital signatures uniquely combine identity verification with tamper detection. This makes them particularly valuable in contexts where trust and authenticity are critical, such as financial transactions, legal documents, and software distribution.

The process of creating a digital signature involves several key steps. First, the original message or data is processed through a hash function to generate a unique fixed-length representation known as a message digest. This digest is then encrypted using the sender’s private key, producing the digital signature. When a recipient receives the signed message, they use the sender’s public key to decrypt the signature and obtain the message digest. By comparing this decrypted digest with a hash computed from the received message, the recipient can confirm both the integrity of the message and the authenticity of the sender. Any alteration of the message during transit would result in a mismatch, immediately signaling potential tampering.
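
A sketch of that sign-and-verify flow, assuming the third-party Python cryptography package is installed and choosing RSA with PSS padding and SHA-256 as one common option, looks like this:

```python
# Sketch of the sign/verify flow described above, using the third-party
# "cryptography" package (pip install cryptography). RSA with PSS padding and
# SHA-256 is one common choice; it is shown here for illustration only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Transfer 100 credits to account 42"

# Signing: the library hashes the message and signs the digest with the private key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification succeeds only if the message is unmodified and the key pair matches.
try:
    public_key.verify(
        signature,
        message + b"!",   # tampered message: verification will fail
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("Signature valid")
except InvalidSignature:
    print("Tampering detected: signature does not match the message")
```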

Digital signatures play a crucial role in a wide array of applications. In secure email, they ensure that messages originate from the claimed sender and have not been modified in transit. In software distribution, they verify that software packages are authentic and free from malicious alterations. Financial institutions rely on digital signatures for secure transaction approvals, while legal frameworks increasingly recognize digitally signed documents as legally binding.

Effective implementation of digital signatures requires rigorous key management practices, including secure storage of private keys, proper certificate issuance and validation, and adherence to established cryptographic standards. Organizations must also ensure compliance with regulations such as eIDAS, PCI DSS, and other industry-specific requirements. By integrating digital signatures into their cybersecurity practices, organizations can prevent forgery, maintain trust, and provide verifiable proof of data origin and integrity, strengthening overall security and regulatory compliance.

Question 92

Which attack technique involves sending unexpected inputs to a system to disrupt its normal operation?

( A ) Buffer Overflow
( B ) Phishing
( C ) SQL Injection
( D ) Spoofing

Answer: A

Explanation: 

Buffer overflow attacks are a serious security vulnerability that occur when a program receives more data than it can safely handle in a designated memory buffer. This overflow can overwrite adjacent memory, potentially causing unpredictable behavior, system crashes, or even enabling attackers to execute arbitrary code. Unlike phishing attacks, which rely on deceiving users into revealing sensitive information, SQL injection, which exploits weaknesses in database queries, or spoofing, which involves impersonating trusted entities, buffer overflow attacks specifically target flaws in how software manages memory allocation. These attacks exploit the lack of proper input validation and insufficient memory boundary checks, making them particularly dangerous for programs written in low-level languages like C or C++, where direct memory manipulation is common.

The consequences of a successful buffer overflow can be severe. Attackers may gain unauthorized access to systems, inject malicious code, escalate privileges, or disrupt critical services. Certain types of buffer overflow, such as stack-based overflows, can allow attackers to overwrite the return address of a function, redirecting execution to injected malicious code. Heap-based overflows, on the other hand, can corrupt dynamically allocated memory, often leading to system instability or data leakage.

Mitigating buffer overflow vulnerabilities requires a combination of secure software development practices and system-level protections. Developers should implement rigorous input validation to ensure that user-provided data does not exceed expected sizes. Using secure coding techniques, such as bounds-checked functions and avoiding unsafe standard library routines, helps reduce exposure. Compiler-level protections, including stack canaries, Address Space Layout Randomization (ASLR), and Data Execution Prevention (DEP), provide additional layers of defense by making it harder for attackers to predict memory locations or execute injected code.

Organizations should also conduct regular code reviews, static and dynamic analysis, and penetration testing to identify potential vulnerabilities before deployment. Special attention should be given to legacy applications, embedded systems, and network-facing software, which often remain susceptible to these attacks due to outdated or insecure coding practices. By proactively addressing buffer overflow risks, organizations can prevent unauthorized access, mitigate malware deployment, and strengthen overall system security, thereby reducing potential operational and reputational damage.

Question 93

Which type of malware self-replicates and spreads without user interaction?

( A ) Trojan
( B ) Worm
( C ) Ransomware
( D ) Adware

Answer: B

Explanation: 

Worms are a type of malicious software capable of self-replication and autonomous spread across computer networks without requiring any user interaction. Unlike Trojans, which disguise themselves as legitimate programs to deceive users, ransomware, which encrypts data to demand payment for decryption, or adware, which delivers intrusive advertisements, worms focus primarily on exploiting system and network vulnerabilities to propagate rapidly. Once a worm infects a system, it can quickly move laterally through connected networks, consuming bandwidth, overloading servers, and spreading malicious payloads such as spyware, ransomware, or backdoors.

The self-replicating nature of worms makes them particularly dangerous because a single infected machine can compromise an entire network in a matter of minutes. They often exploit unpatched software vulnerabilities, weak configurations, or outdated operating systems. Worms like Code Red, SQL Slammer, and WannaCry have demonstrated the scale of damage possible, causing global outages, data loss, and billions of dollars in financial damages. These historical incidents emphasize the importance of proactive defense strategies and highlight how even minor lapses in security maintenance can result in widespread compromise.

Preventing and mitigating worm infections requires a multi-layered security approach. Effective measures include regular patch management to fix vulnerabilities that worms exploit, deploying intrusion detection and prevention systems to identify abnormal traffic behavior, and implementing firewalls to restrict unauthorized network communications. Network segmentation is also vital, as it helps contain potential infections within limited zones, preventing unrestricted spread across critical infrastructure.

Endpoint protection tools and real-time monitoring can help detect signs of worm activity, such as unusual file creation, rapid replication, or excessive bandwidth consumption. Additionally, maintaining updated antivirus definitions and conducting regular vulnerability scans ensure that emerging threats are identified early.

Organizations should also prioritize user awareness and response preparedness through regular training and incident response simulations. Having a well-defined response plan allows security teams to act swiftly to isolate infected systems, remove malicious code, and restore affected services. By combining preventive measures, continuous monitoring, and timely updates, organizations can significantly reduce the risk of worm outbreaks and protect their systems from the operational and financial impact of these highly contagious cyber threats.

Question 94

Which protocol is primarily used to securely manage network devices over an encrypted session?

( A ) Telnet
( B ) SSH
( C ) FTP
( D ) HTTP

Answer: B

Explanation:

Secure Shell (SSH) is a cryptographic network protocol designed to enable secure communication and remote management of network devices, servers, and systems. It provides a protected channel over an untrusted network, ensuring that both authentication and data transmission occur in an encrypted and confidential manner. Unlike Telnet, which transmits data, including credentials, in plaintext, File Transfer Protocol (FTP), which handles file transfers without encryption, and Hypertext Transfer Protocol (HTTP), which lacks built-in security, SSH offers robust protection through encryption and integrity verification. This prevents unauthorized users from intercepting, modifying, or reading sensitive information during transmission.

SSH operates by establishing an encrypted session between a client and a server, using either password-based authentication or cryptographic key pairs. In key-based authentication, a public key is stored on the server, while the private key remains securely on the client device. This method enhances security by eliminating the need to transmit passwords and by reducing the risk of brute-force attacks. SSH is widely utilized in enterprise environments for administrative access, system configuration, network troubleshooting, and secure file transfers using Secure File Transfer Protocol (SFTP) or Secure Copy Protocol (SCP). It also supports advanced capabilities such as port forwarding and tunneling, allowing secure transmission of other protocols through encrypted channels.
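
As a sketch of that key-based administrative access, the example below uses the third-party paramiko library; the host, username, key path, and command are placeholders, and a production deployment would pin known host keys rather than auto-accepting them:

```python
# Sketch of key-based SSH administration using the third-party "paramiko"
# library (pip install paramiko). The host, username, key path, and command are
# placeholders; production use should pin known host keys instead of the
# auto-add policy shown here for brevity.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # convenience for the sketch only

client.connect(
    hostname="203.0.113.20",
    username="netadmin",
    key_filename="/home/netadmin/.ssh/id_ed25519",   # the private key never leaves the client
)

stdin, stdout, stderr = client.exec_command("show version")  # example device command
print(stdout.read().decode())
client.close()
```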

Maintaining the integrity of SSH environments requires diligent key management and secure configuration practices. Administrators should enforce key-based authentication instead of passwords, disable root or direct administrative logins, and use strong encryption algorithms. Implementing access controls, session timeout policies, and intrusion detection systems further strengthens protection against unauthorized access attempts. Regular software updates and patching are crucial to address emerging vulnerabilities that attackers might exploit.

Monitoring SSH activity through centralized logging and audit mechanisms helps detect suspicious login attempts or configuration changes in real time. Additionally, organizations should establish clear policies for managing SSH keys, including rotation, expiration, and revocation, to prevent misuse.

By combining these best practices, SSH becomes a cornerstone of secure remote administration and network management. It ensures that sensitive operations are performed safely, preserving the confidentiality, integrity, and availability of systems even when accessed over potentially insecure networks. As organizations continue to expand remote infrastructure, implementing and maintaining secure SSH configurations remains vital to safeguarding critical assets and preventing unauthorized control or data compromise.

Question 95

Which security control type is designed to alert administrators of suspicious activities or potential breaches?

( A ) Preventive Control
( B ) Detective Control
( C ) Corrective Control
( D ) Compensating Control

Answer: B

Explanation:

Detective controls are essential components of an organization’s security framework, designed to identify, record, and alert administrators to potential security incidents or abnormal activities within systems and networks. Their primary purpose is to detect and report, rather than prevent or correct, malicious or unauthorized actions. While preventive controls aim to stop attacks before they occur, corrective controls focus on restoring systems after an incident, and compensating controls act as temporary or alternative measures when primary controls are unavailable, detective controls specialize in observation and notification. They serve as the eyes and ears of an organization’s security infrastructure, providing visibility into what is happening across various environments.

Common examples of detective controls include intrusion detection systems (IDS), which monitor network traffic for signs of malicious activity, and log monitoring tools, which analyze system logs to identify irregular patterns. File integrity monitoring systems track changes to critical files, helping detect unauthorized modifications that might indicate a compromise. Security Information and Event Management (SIEM) systems bring these elements together by aggregating data from multiple sources, correlating events, and generating alerts for potential threats. These tools work best when properly configured with appropriate thresholds and tuning to reduce false positives and ensure meaningful alerts.

The effectiveness of detective controls depends heavily on continuous monitoring, timely analysis, and a well-defined response process. Continuous observation allows organizations to detect insider threats, malware infections, policy violations, and other suspicious behaviors before they escalate into major incidents. Integrating detective controls with incident response procedures ensures that alerts are acted upon quickly, minimizing potential damage. Proper tuning and prioritization of alerts help security teams focus on high-risk activities rather than being overwhelmed by noise.

Ultimately, detective controls complement preventive and corrective measures to create a layered defense strategy, often referred to as defense-in-depth. They provide organizations with situational awareness, helping them understand ongoing activities, uncover hidden threats, and improve overall resilience. By maintaining and refining these controls, organizations can respond proactively to emerging risks, ensuring stronger protection for their systems and data.

Question 96

Which cloud security model divides responsibilities between the service provider and the customer?

( A ) Shared Responsibility Model
( B ) Zero Trust Model
( C ) Defense-in-Depth Model
( D ) On-Premises Security Model

Answer: A

Explanation: 

The Shared Responsibility Model is a fundamental concept in cloud security that clearly defines how security duties are divided between cloud service providers and their customers. Its main purpose is to ensure that both parties understand their respective roles in protecting data, applications, and systems. This model differs from other security frameworks such as Zero Trust, which assumes no implicit trust among users or devices, Defense-in-Depth, which relies on multiple layers of security controls, and traditional on-premises security, where organizations retain full responsibility for all infrastructure and data protection measures. In the Shared Responsibility Model, responsibility is distributed according to the type of cloud service being used—whether it is Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS).

In this model, cloud providers are responsible for securing the underlying infrastructure that powers the cloud. This includes maintaining the physical data centers, network components, and the hardware and software that form the foundation of the service. Providers also handle aspects such as availability, redundancy, and physical access control. On the other hand, customers are responsible for protecting what they store and operate within the cloud environment. This typically includes managing user identities, enforcing strong authentication, configuring cloud resources securely, encrypting data, and monitoring workloads for suspicious activity.

Question 97

Which attack manipulates individuals into divulging confidential information or performing unsafe actions through deception?

( A ) Social Engineering
( B ) Brute Force
( C ) Cross-Site Request Forgery
( D ) Man-in-the-Middle

Answer: A

Explanation:

Social engineering is a cyberattack technique that focuses on exploiting human behavior and psychological manipulation rather than targeting technical weaknesses. Attackers use deception to trick individuals into revealing confidential information, sharing login credentials, or performing actions that compromise security systems. This method takes advantage of human emotions such as trust, fear, curiosity, or urgency to influence victims’ decisions. Unlike brute force attacks, which involve automated attempts to guess passwords, cross-site request forgery, which manipulates authenticated web sessions, or man-in-the-middle attacks, which intercept communications, social engineering relies primarily on manipulating human judgment and trust.

There are several common forms of social engineering. Phishing is the most widespread, involving fraudulent emails or messages that appear to come from trusted sources and encourage recipients to click malicious links or provide sensitive data. Pretexting involves fabricating a believable scenario or identity to persuade a target to share information or grant access. Baiting uses enticing offers, such as free downloads or USB drives, to lure victims into compromising their systems. Tailgating occurs when an unauthorized individual physically follows an employee into a restricted area. Phone-based scams, often called vishing, use voice communication to impersonate legitimate entities such as banks or support services.

Question 98

Which security model assumes no user or device is inherently trustworthy, requiring continuous verification?

( A ) Shared Responsibility Model
( B ) Zero Trust Model
( C ) Defense-in-Depth Model
( D ) Perimeter-Based Security Model

Answer: B

Explanation:

The Zero Trust Model is a modern cybersecurity framework built on the principle that no user, device, or network component should be automatically trusted. It operates under the assumption that threats may exist both inside and outside an organization’s network, meaning access should never be granted based solely on location or prior authentication. Instead, every access request must be continuously verified before permission is given. This approach differs from other security models such as the Shared Responsibility Model, which focuses on dividing security roles between providers and customers, Defense-in-Depth, which emphasizes layering multiple controls, and Perimeter-Based Security, which relies heavily on securing the boundaries of a network. Zero Trust eliminates the idea of a trusted internal network by enforcing strict verification for every connection attempt, regardless of origin.

At its core, the Zero Trust Model is based on several key principles: continuous authentication, least privilege access, micro-segmentation, and comprehensive monitoring. Continuous authentication ensures that identity and device credentials are constantly validated, not just at login. The least privilege principle limits users to the minimum level of access required for their tasks, reducing the potential impact of compromised accounts. Micro-segmentation divides networks into smaller, isolated zones, preventing attackers from moving freely if they gain initial access. Ongoing monitoring and traffic analysis detect anomalies and help respond quickly to suspicious activities.
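
A toy per-request policy check, with entirely hypothetical field names and rules, can illustrate how these principles combine so that no single factor such as network location grants access:

```python
# Toy per-request policy check reflecting the principles above: every request is
# evaluated on identity, device posture, and requested privilege, with no implicit
# trust based on network location. All field names and rules are hypothetical.

def authorize(request: dict) -> bool:
    """Grant access only when every condition holds for this specific request."""
    checks = [
        request.get("mfa_verified") is True,        # identity re-verified, not just at login
        request.get("device_compliant") is True,    # patched, managed endpoint
        request.get("resource") in request.get("allowed_resources", set()),  # least privilege
    ]
    return all(checks)

print(authorize({
    "user": "alice",
    "mfa_verified": True,
    "device_compliant": True,
    "resource": "payroll-db",
    "allowed_resources": {"payroll-db"},
}))  # True

print(authorize({
    "user": "alice",
    "mfa_verified": True,
    "device_compliant": False,   # non-compliant device is rejected even on the "internal" network
    "resource": "payroll-db",
    "allowed_resources": {"payroll-db"},
}))  # False
```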

Question 99

Which security process ensures that critical operations can continue during and after a disruption or disaster?

( A ) Business Continuity Planning
( B ) Incident Response Planning
( C ) Risk Assessment
( D ) Vulnerability Management

Answer: A

Explanation:

Business Continuity Planning, commonly known as BCP, is a systematic approach designed to ensure that an organization can continue essential operations during and after disruptive incidents. These disruptions can stem from a variety of sources, including natural disasters, cyberattacks, system failures, or other unexpected crises that threaten to halt normal business functions. The primary goal of BCP is to maintain the availability of critical services, protect vital data, and minimize downtime and financial loss. This approach differs from other security and resilience strategies such as Incident Response, which focuses specifically on addressing and containing security breaches, Risk Assessment, which identifies and evaluates potential threats, and Vulnerability Management, which aims to reduce weaknesses within systems. In contrast, BCP is broader in scope, emphasizing preparedness and continuity of operations rather than solely prevention or recovery.

A robust business continuity plan typically includes several core components. These include conducting a business impact analysis to identify which processes are most essential to the organization’s survival, developing contingency procedures to handle disruptions, and establishing alternative site arrangements in case primary facilities become unavailable. Regular staff training ensures that employees understand their roles and responsibilities during emergencies, while scheduled testing and simulations help verify that recovery strategies function effectively in real-world scenarios. Integrating disaster recovery plans, communication strategies, and supply chain continuity measures further strengthens the organization’s ability to withstand and recover from crises.

Effective business continuity planning also requires ongoing evaluation and updates. As technology, regulations, and business operations evolve, plans must be adjusted to remain relevant and effective. Management support and cross-departmental collaboration are critical for ensuring alignment with corporate objectives and fostering a culture of resilience. Investments in redundancy, automation, and crisis management tools can also enhance preparedness and reduce the impact of disruptions. Ultimately, a well-implemented BCP not only safeguards operational stability but also protects the organization’s reputation, revenue, and regulatory compliance. By anticipating potential disruptions and planning accordingly, businesses can ensure that mission-critical functions remain operational even under challenging circumstances.

Question 100

Which authentication approach combines multiple verification factors, such as something you know, have, or are?

( A ) Single-Factor Authentication
( B ) Two-Factor Authentication
( C ) Multi-Factor Authentication
( D ) Knowledge-Based Authentication

Answer: C

Explanation:

Multi-Factor Authentication, commonly referred to as MFA, is a security mechanism that enhances user authentication by requiring two or more independent factors before granting access to a system, application, or network. This layered verification process provides a much stronger defense compared to traditional authentication methods. Unlike Single-Factor Authentication, which depends on only one credential such as a password, Two-Factor Authentication, which requires two types of verification, or Knowledge-Based Authentication, which relies on user-provided information like security questions, MFA integrates multiple types of factors to establish a user’s identity more securely.

The authentication factors used in MFA generally fall into three main categories: knowledge, possession, and inherence. Knowledge refers to something the user knows, such as a password or PIN. Possession involves something the user has, such as a hardware token, smart card, or a mobile device used to receive a one-time passcode. Inherence refers to something the user is, typically verified through biometric data like fingerprints, facial recognition, or voice patterns. Combining these factors makes it exponentially more difficult for attackers to gain unauthorized access, even if one credential is compromised.
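
The one-time passcode used for the possession factor is commonly generated with a time-based algorithm such as TOTP (RFC 6238); the standard-library Python sketch below shows the idea, with an example shared secret:

```python
# Minimal TOTP (RFC 6238) sketch illustrating the possession factor described
# above: client and server share a secret, and the 6-digit code changes every
# 30 seconds. The Base32 secret below is an example value only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time() // interval)                 # time-based moving factor
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

shared_secret = "JBSWY3DPEHPK3PXP"   # example secret, provisioned to both sides
print("Current one-time code:", totp(shared_secret))
```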

 
