CompTIA SecurityX CAS-005 Exam Dumps and Practice Test Questions Set 8 (Questions 141–160)


Question 141

Which type of security control focuses on monitoring systems to detect and respond to potential threats in real time?

A) Preventive Control
B) Detective Control
C) Corrective Control
D) Administrative Control

Answer: B

Explanation:

Detective controls are an essential component of a robust cybersecurity strategy, designed to identify and alert organizations to unauthorized activities, system misuse, policy violations, or other security incidents as they occur. Unlike preventive controls, which aim to stop security events before they happen, corrective controls, which focus on mitigating the impact after an incident has occurred, or administrative controls, which rely on policies, procedures, and training, detective controls focus on observation, monitoring, and alerting. Their primary purpose is to provide visibility into ongoing operations and potential threats, enabling organizations to react in a timely and informed manner.

Detective controls encompass a wide range of technologies and processes, including intrusion detection systems (IDS), security information and event management (SIEM) platforms, network monitoring tools, file integrity checkers, and detailed audit logs. These systems continuously collect, analyze, and correlate data from various sources to detect anomalies, unusual behavior, or indicators of compromise. For example, an IDS may flag suspicious network traffic patterns that could indicate a malware infection or a brute-force attack, while SIEM systems can aggregate logs from multiple devices to identify trends and provide context for potential incidents. Regular audits of system access and activity logs also fall under detective controls, allowing organizations to spot unauthorized actions that might otherwise go unnoticed.
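To make the monitoring-and-alerting idea concrete, here is a minimal Python sketch of one detective control: counting failed logins per source IP in an authentication log and flagging likely brute-force activity. The log format, threshold, and addresses are illustrative assumptions, not taken from any specific IDS or SIEM product.

```python
# A minimal sketch of a detective control: flag possible brute-force
# activity by counting failed logins per source IP in an auth log.
# The log format and threshold are illustrative assumptions.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # alert once an IP exceeds this many failures

def detect_brute_force(log_lines):
    """Return IPs whose failed-login count exceeds the threshold."""
    failures = Counter()
    for line in log_lines:
        # Assumed format: "<timestamp> FAILED_LOGIN user=<u> ip=<addr>"
        if "FAILED_LOGIN" in line:
            ip = line.rsplit("ip=", 1)[-1].strip()
            failures[ip] += 1
    return [ip for ip, count in failures.items()
            if count > FAILED_LOGIN_THRESHOLD]

sample_log = [f"2024-01-01T10:00:{i:02d} FAILED_LOGIN user=admin ip=203.0.113.7"
              for i in range(8)]
print(detect_brute_force(sample_log))  # ['203.0.113.7']
```

A real deployment would feed this kind of logic from aggregated SIEM data and tune the threshold to reduce false positives, as discussed below.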

The effectiveness of detective controls relies not only on technology but also on proper configuration, continuous tuning, and skilled personnel to interpret alerts accurately. False positives must be minimized to ensure that true incidents receive timely attention, and response procedures should be clearly defined so that detected threats can be addressed promptly. Detective controls are vital for regulatory compliance, as many standards require organizations to monitor and document security-relevant events. They also provide situational awareness, helping organizations understand their risk landscape and prioritize resources for remediation. By implementing strong detective controls, organizations enhance their ability to identify potential security breaches early, limit potential damage, and maintain an overall resilient security posture, ultimately supporting both operational integrity and long-term risk management.

Question 142

Which security framework focuses on identifying, protecting, detecting, responding, and recovering from cybersecurity incidents?

A) ISO 27001
B) NIST Cybersecurity Framework
C) COBIT
D) HIPAA

Answer: B

Explanation: 

The NIST Cybersecurity Framework (CSF) is a widely recognized and comprehensive framework designed to help organizations manage and improve their cybersecurity posture in a structured and risk-based manner. It provides a flexible approach that can be tailored to organizations of all sizes and industries, making it particularly valuable for critical infrastructure and enterprises seeking to strengthen their cyber resilience. Unlike ISO 27001, which primarily focuses on establishing and maintaining an information security management system, COBIT, which emphasizes governance and IT management practices, or HIPAA, which regulates the privacy and security of healthcare data, the NIST CSF provides a holistic view of cybersecurity activities, emphasizing both proactive and reactive measures.

The framework is organized into five core functions: Identify, Protect, Detect, Respond, and Recover. The Identify function helps organizations gain a clear understanding of their assets, data, systems, and potential risks, which forms the foundation for all other security activities. Protect focuses on implementing safeguards to limit or prevent cyber incidents, including access control, training, and data security measures. Detect emphasizes timely discovery of cybersecurity events through continuous monitoring, threat intelligence, and anomaly detection. Respond addresses the processes required to contain and mitigate the impact of an incident, such as incident response planning, communication protocols, and mitigation procedures. Finally, the Recover function ensures that organizations can restore operations and services after a security event, supporting business continuity and lessons learned to improve future resilience.

By adopting the NIST CSF, organizations can systematically assess their cybersecurity maturity, identify gaps, and prioritize improvements based on risk. The framework encourages the integration of cybersecurity practices into daily operations, aligning security objectives with overall business goals. It also provides a common language for cybersecurity, facilitating communication between technical teams, management, and external stakeholders. Regular use of the framework supports continuous improvement, risk-informed decision-making, and a proactive stance against evolving cyber threats. Overall, NIST CSF enhances organizational resilience, strengthens security governance, and ensures a structured approach to managing cybersecurity risks while maintaining operational effectiveness and regulatory alignment.

Question 143

Which type of malware replicates itself across networks and devices without user intervention?

A) Trojan
B) Worm
C) Ransomware
D) Spyware

Answer: B

Explanation: 

Worms are a type of self-replicating malware that can spread autonomously across networks and systems without any user intervention. Unlike Trojans, which disguise themselves as legitimate applications to trick users into executing them, ransomware, which encrypts data and demands payment for its release, or spyware, which secretly monitors user activity, worms are designed primarily for rapid propagation. They exploit vulnerabilities in operating systems, software applications, or network protocols, allowing them to move from one system to another without human interaction. This capability makes worms particularly dangerous, as they can quickly infect large networks, overwhelming bandwidth, degrading system performance, and potentially creating opportunities for additional attacks.

Once a worm infects a system, it may carry payloads that further compromise security, such as installing backdoors, creating botnets, or facilitating the exfiltration of sensitive data. Unlike other malware that relies on user action, worms can replicate independently, making detection and containment more challenging. Historical examples, such as the WannaCry and Conficker worms, demonstrate the widespread disruption and financial losses that uncontrolled worm outbreaks can cause, highlighting the importance of proactive security measures.

Effective defense against worms requires a multi-layered approach. Timely patch management is essential to close the vulnerabilities that worms exploit. Firewalls, intrusion detection and prevention systems, and network segmentation can limit propagation and isolate infected systems. Endpoint protection, including antivirus and behavioral monitoring, helps detect and remediate infections early. Additionally, user education about safe computing practices, even though worms may not require interaction, supports overall cybersecurity awareness and vigilance.

Organizations must also incorporate worm-specific scenarios into their incident response planning. Rapid identification, containment, and remediation are crucial to minimizing operational disruption. Regular monitoring of network traffic, anomaly detection, and vulnerability assessments help detect early signs of worm activity. By combining these technical controls with proactive policies and procedures, organizations can mitigate the risk of worm infections, protect critical assets, and reduce the operational, financial, and reputational impact of these rapidly spreading threats.

Question 144

Which access control model is based on predefined security labels assigned to resources and users?

A) Discretionary Access Control (DAC)
B) Mandatory Access Control (MAC)
C) Role-Based Access Control (RBAC)
D) Attribute-Based Access Control (ABAC)

Answer: B

Explanation: 

Mandatory Access Control (MAC) is a stringent access control model in which access decisions are determined based on predefined security labels assigned to both users and resources. Unlike Discretionary Access Control (DAC), where resource owners have the flexibility to assign permissions, Role-Based Access Control (RBAC), which grants access according to predefined roles, or Attribute-Based Access Control (ABAC), which evaluates attributes dynamically to make access decisions, MAC centralizes authority and enforces policies that cannot be overridden by individual users. This model is particularly suitable for environments that demand high levels of security, such as military systems, government agencies, intelligence operations, and critical infrastructure, where the protection of sensitive information is paramount.

In a MAC environment, both users and resources are assigned security labels, such as classifications or clearance levels, and access decisions are made by comparing these labels. The operating system or application enforces these rules rigorously, ensuring that no user can access information for which they do not have the appropriate clearance. This centralized enforcement mitigates the risk of insider threats, accidental data exposure, or misconfigurations that could compromise security. Because users cannot alter access permissions, MAC provides consistent and predictable control over sensitive data, which is essential for highly regulated industries.
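The label comparison can be illustrated with a short Python sketch in the style of the Bell-LaPadula "no read up" rule: a subject may read an object only if the subject's clearance dominates the object's classification. The specific labels and their ordering are illustrative assumptions.

```python
# A minimal sketch of a MAC read check: access is granted only when the
# subject's clearance level dominates the object's classification.
# The label hierarchy below is an illustrative assumption.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    return LEVELS[subject_clearance] >= LEVELS[object_label]

print(can_read("SECRET", "CONFIDENTIAL"))    # True: clearance dominates label
print(can_read("CONFIDENTIAL", "TOP_SECRET"))  # False: insufficient clearance
```

Note that neither the subject nor the resource owner can change these labels; only the central policy authority can, which is the defining property of MAC.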

Implementing MAC effectively involves several critical steps. Organizations must classify their data accurately, assigning labels that reflect sensitivity and regulatory requirements. Users must also be categorized according to clearance levels or roles, and access policies must be enforced through robust technical mechanisms, including operating system controls, application-level permissions, and network security configurations. In addition, organizations should regularly audit and review access policies to ensure that classifications remain accurate and security labels are correctly enforced.

Question 145

Which type of cryptography uses a single key for both encryption and decryption?

A) Asymmetric Cryptography
B) Symmetric Cryptography
C) Hashing
D) Public Key Infrastructure

Answer: B

Explanation: 

Symmetric cryptography is a method of securing information by using a single shared secret key for both encryption and decryption processes. This means that the same key is used to transform plaintext into ciphertext and then back into readable plaintext, making the secure management of the key critical to the security of the system. Unlike asymmetric cryptography, which relies on a key pair consisting of a public key for encryption and a private key for decryption, symmetric cryptography is generally faster and more efficient, especially for encrypting large volumes of data. It also differs from hashing, which produces a fixed-length digest that cannot be reversed to reveal the original data, and from Public Key Infrastructure (PKI), which focuses on managing digital certificates and key distribution rather than performing bulk encryption.

The speed and efficiency of symmetric cryptography make it ideal for applications where performance is critical. Common symmetric algorithms include AES (Advanced Encryption Standard), DES (Data Encryption Standard), Triple DES, and ChaCha20, each providing different levels of security and performance. Symmetric encryption is widely used in practical applications such as virtual private networks (VPNs), secure file transfers, full disk encryption, database encryption, and other systems requiring rapid processing of sensitive information. These algorithms ensure that data remains confidential while in transit or at rest, provided that the secret key is protected.
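As a concrete sketch of the single-key property, the following example encrypts and decrypts with AES-256-GCM using the third-party cryptography package (installed via pip install cryptography). It is a minimal illustration, not a production key-management scheme.

```python
# A minimal sketch of symmetric encryption with AES-256-GCM: the same
# key performs both encryption and decryption.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the single shared secret key
aesgcm = AESGCM(key)
nonce = os.urandom(12)  # GCM nonce; must be unique per message under a key

ciphertext = aesgcm.encrypt(nonce, b"sensitive payload", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)  # same key reverses it
assert plaintext == b"sensitive payload"
```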

One of the main challenges of symmetric cryptography is key management. The shared secret key must be distributed securely to all authorized parties without interception by attackers, and it must be stored safely to prevent unauthorized access. Improper key handling can compromise the entire encryption scheme. To address this, many systems combine symmetric and asymmetric cryptography, using asymmetric encryption to securely exchange symmetric session keys while taking advantage of symmetric encryption for high-speed bulk data protection.

Question 146

Which type of attack manipulates a user into performing unintended actions by exploiting the trust of a web application?

A) Cross-Site Scripting (XSS)
B) Cross-Site Request Forgery (CSRF)
C) SQL Injection
D) Session Hijacking

Answer: B

Explanation: 

Cross-Site Request Forgery, commonly known as CSRF, is a web-based attack that exploits the trust a web application places in a user’s browser. In a CSRF attack, an attacker tricks a user into performing actions on a web application without their knowledge or consent, leveraging the user’s authenticated session. This type of attack differs from other common web threats. For instance, Cross-Site Scripting (XSS) focuses on injecting malicious scripts into web pages to steal data or manipulate content, SQL Injection targets databases by altering queries to access or modify sensitive data, and Session Hijacking involves stealing or intercepting session credentials to impersonate a user. In contrast, CSRF relies on the user’s existing privileges to perform unauthorized actions, such as changing account settings, transferring funds, or modifying user data.

CSRF attacks often employ techniques like embedding malicious links in emails or social media posts, creating hidden forms that automatically submit requests, or using JavaScript to trigger actions on the victim’s behalf. Because the browser automatically includes cookies and session tokens with legitimate requests, the web application may not distinguish between authorized user actions and malicious requests generated by the attacker. This makes CSRF particularly dangerous in applications that do not implement proper request validation mechanisms.

Mitigation strategies for CSRF focus on ensuring that every sensitive action is explicitly authorized. Anti-CSRF tokens are widely used, which are unique, unpredictable values associated with a user session and included in requests to verify their authenticity. Enforcing same-origin policies helps ensure that requests come from trusted domains. Additional measures include validating custom request headers, requiring explicit user interactions such as confirmation prompts before sensitive operations, and limiting session lifetimes to reduce the window of opportunity for exploitation.
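The anti-CSRF token pattern can be sketched in a few lines of standard-library Python: issue an unpredictable per-session token with the form, then compare it in constant time when the request returns. The dict-based session store is purely for illustration; a real application would use its framework's session machinery.

```python
# A minimal sketch of anti-CSRF token handling: a random per-session
# token is issued with the form and verified on submission.
import hmac
import secrets

session_store = {}  # session_id -> expected CSRF token (illustrative)

def issue_csrf_token(session_id: str) -> str:
    token = secrets.token_urlsafe(32)  # unpredictable per-session value
    session_store[session_id] = token
    return token  # embedded in the form as a hidden field

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    expected = session_store.get(session_id, "")
    return hmac.compare_digest(expected, submitted)  # constant-time compare

tok = issue_csrf_token("sess-1")
print(verify_csrf_token("sess-1", tok))       # True: legitimate request
print(verify_csrf_token("sess-1", "forged"))  # False: attacker-supplied value
```

Because the attacker's forged request cannot predict the token, the browser's automatic cookie inclusion is no longer sufficient to make the request look legitimate.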

Question 147

Which type of backup stores only data that has changed since the last backup?

A) Full Backup
B) Incremental Backup
C) Differential Backup
D) Mirror Backup

Answer: B

Explanation: 

Incremental backups are a type of data backup strategy that focuses on efficiency by saving only the changes made since the last backup of any type, whether full or incremental. This approach is distinct from other backup methods. Full backups copy all data from the system, creating a complete snapshot each time, which can be time-consuming and resource-intensive, especially for large datasets. Differential backups, on the other hand, store all changes made since the last full backup, resulting in larger backup files over time but simplifying restoration. Mirror backups create exact duplicates of the data, offering real-time replication but requiring substantial storage and often lacking versioning capabilities. Incremental backups stand out because they significantly reduce storage requirements and minimize the time needed for each backup operation, making them highly suitable for environments with large volumes of data or frequent changes.

One of the critical considerations with incremental backups is the restoration process. Since each incremental backup only contains changes since the previous backup, recovery requires first restoring the most recent full backup, followed by applying each incremental backup in chronological order. This sequential restoration ensures that all modifications are captured, but it also means that the failure or corruption of any incremental backup in the chain can complicate recovery. Therefore, careful management and verification of backup integrity are essential.

Organizations typically implement incremental backups as part of a broader, multi-tiered backup strategy. This may include combining full backups on a weekly basis with daily incremental backups and periodic differential backups to balance recovery speed, storage efficiency, and operational overhead. Automated backup scheduling, encrypted storage, and offsite replication further enhance reliability, security, and compliance. Regular testing of backup restoration processes ensures that data recovery will be successful in the event of system failures, cyberattacks, or accidental deletions. By using incremental backups effectively, organizations can maintain robust data protection while minimizing the impact on network performance and storage resources, ensuring continuity and operational resilience.

Question 148

Which cybersecurity principle ensures that users have access only to the information necessary to perform their job functions?

A) Principle of Least Privilege
B) Separation of Duties
C) Non-Repudiation
D) Defense in Depth

Answer: A

Explanation: 

The Principle of Least Privilege is a fundamental security concept that ensures users, systems, and processes are granted only the minimum access necessary to perform their required functions. By limiting permissions, organizations can significantly reduce the potential damage caused by compromised accounts, accidental misuse, or insider threats. Unlike Separation of Duties, which distributes responsibilities among multiple individuals to prevent fraud or error, Non-Repudiation, which provides proof of origin and integrity for communications, or Defense in Depth, which relies on multiple layers of security controls, the Principle of Least Privilege focuses specifically on restricting operational access to systems and data. Its primary goal is to minimize the exposure of sensitive information and critical systems, thereby reducing risk across the enterprise.

Implementing least privilege requires careful planning and continuous management. Role-based access control is a common method, assigning permissions according to job responsibilities rather than granting broad access. Periodic access reviews help ensure that users maintain only the privileges they currently need, while formal access request and approval processes prevent unnecessary access from being granted without oversight. Monitoring and auditing of privileged accounts are also essential components, enabling security teams to detect unusual or unauthorized activities that could indicate misuse or compromise. Temporary privilege escalation mechanisms, such as just-in-time access, further enhance security by allowing elevated permissions only when absolutely necessary and for limited periods.

Adopting the Principle of Least Privilege provides multiple benefits beyond risk reduction. It strengthens regulatory compliance by demonstrating controlled and documented access to sensitive data, enhances audit capabilities through detailed tracking of who accessed what and when, and supports incident response by narrowing the scope of potential vulnerabilities. Automation and identity governance tools are often employed to enforce least privilege at scale, managing access across enterprise systems, cloud platforms, and endpoints efficiently. When implemented effectively, least privilege contributes to a resilient security posture, reducing attack surfaces, protecting organizational assets, and ensuring that access is aligned with operational necessity rather than convenience.

Question 149

Which type of wireless security protocol uses dynamic encryption keys and improved authentication compared to WEP?

A) WPA2
B) WEP
C) WPA
D) WPA3

Answer: A

Explanation:

WPA2, or Wi-Fi Protected Access 2, is a widely used security protocol that enhances the protection of wireless networks through the use of strong encryption and dynamic key management. It was introduced to address the vulnerabilities of earlier Wi-Fi security standards such as WEP, which could be easily compromised due to weak encryption methods, and WPA, which relied on TKIP (Temporal Key Integrity Protocol) and provided a lower level of security. WPA2 replaces these older protocols with AES (Advanced Encryption Standard), a robust encryption algorithm that ensures both data confidentiality and integrity, making it significantly more secure against eavesdropping and unauthorized access.

One of the key features of WPA2 is its support for dynamic key management. Rather than relying on a fixed pre-shared key, WPA2 generates unique session keys for each device on the network, which are frequently updated. This approach reduces the risk of replay attacks and limits potential exposure if a single key is compromised. In enterprise environments, WPA2 often integrates with 802.1X authentication and RADIUS servers, enabling centralized access control and stronger user verification. This combination ensures that only authorized users can connect to the network while maintaining a high level of security for transmitted data.

WPA2 is implemented in both consumer and enterprise wireless networks to secure communications, prevent unauthorized network access, and protect sensitive information from interception. Proper configuration is essential for its effectiveness, including strong passphrases for personal networks, regular password rotation, and monitoring for unusual activity. While WPA3 has introduced additional security enhancements such as SAE (Simultaneous Authentication of Equals) and improved protection against brute-force attacks, WPA2 remains widely deployed due to its compatibility with a broad range of devices and operating systems.

Question 150

Which type of attack involves intercepting communication between two parties to eavesdrop, alter, or impersonate messages?

A) Man-in-the-Middle
B) Phishing
C) Brute Force
D) Denial of Service

Answer: A

Explanation:

Man-in-the-Middle (MITM) attacks are a form of cyberattack in which an attacker secretly intercepts, alters, or relays communication between two parties without their knowledge. The attacker positions themselves between the sender and receiver, allowing them to eavesdrop on sensitive information, manipulate messages, or inject malicious content. Unlike phishing attacks, which rely on deceiving users into divulging credentials, brute force attacks that target passwords, or denial of service attacks that aim to disrupt service availability, MITM attacks specifically compromise the confidentiality and integrity of data as it travels between endpoints.

MITM attacks often exploit vulnerabilities in networks, weak encryption protocols, or flaws in communication systems. Common techniques include ARP (Address Resolution Protocol) spoofing, where attackers trick devices into routing traffic through their system; DNS (Domain Name System) poisoning, which redirects users to malicious sites; HTTPS stripping, which downgrades secure connections to unencrypted HTTP; and eavesdropping on unsecured Wi-Fi networks. By exploiting these weaknesses, attackers can capture login credentials, financial information, private communications, and other sensitive data without the victims’ awareness.

Mitigating MITM attacks requires a combination of technical controls and user awareness. Implementing strong encryption standards such as TLS/SSL for web traffic ensures that intercepted data cannot be easily read or modified. Mutual authentication and secure key exchange mechanisms further protect against unauthorized interception. Using virtual private networks (VPNs) encrypts traffic over potentially insecure networks, adding an additional layer of protection. Network monitoring, intrusion detection systems, and proper certificate validation help organizations detect suspicious activity that may indicate a MITM attempt.
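The certificate-validation defense can be shown with the Python standard library: ssl.create_default_context() verifies the server certificate and hostname by default, so an interceptor presenting a bad certificate causes the handshake to fail rather than silently succeeding. The hostname below is only a placeholder.

```python
# A minimal sketch of a MITM countermeasure: a TLS connection with
# certificate and hostname validation enabled. A failed validation raises
# an ssl.SSLCertVerificationError instead of talking to an interceptor.
import socket
import ssl

context = ssl.create_default_context()  # verifies certs and hostnames

with socket.create_connection(("example.com", 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                 # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])  # validated server certificate
```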

Question 151

Which authentication method relies on a physical token combined with a PIN to grant access?

A) Biometric Authentication
B) Smart Card Authentication
C) Password Authentication
D) Single Sign-On

Answer: B

Explanation: 

Smart card authentication is a secure method of verifying user identity that relies on a physical card containing an embedded microchip, often used in combination with a personal identification number (PIN). This approach provides two-factor authentication, combining something the user possesses—the smart card—with something the user knows—the PIN. This dual-factor model significantly enhances security by mitigating risks associated with stolen, lost, or easily guessed passwords. Unlike biometric authentication, which relies on inherent physical traits such as fingerprints or facial recognition, smart card authentication does not require specialized biological verification. It also differs from password-based authentication, which relies solely on knowledge credentials and is more vulnerable to attacks like phishing or credential stuffing. Unlike single sign-on systems, which streamline authentication across multiple applications, smart cards provide direct verification of identity for secure access to systems and services.

Smart cards are capable of storing digital certificates, cryptographic keys, and authentication credentials, making them versatile tools for secure logins, email encryption, and digital signatures. The embedded chip allows for cryptographic operations to be performed securely on the card itself, protecting sensitive information from exposure even if the card is lost. Deploying smart card authentication requires appropriate infrastructure, including card readers, middleware to manage interactions between the card and applications, and integration with certificate authorities for issuance and management of digital certificates. These components ensure that only authorized users with valid cards and PINs can access protected resources.

Smart card solutions are widely adopted in government, enterprise, and financial environments where strong authentication is critical. They are particularly effective in scenarios requiring compliance with stringent security policies, such as accessing classified data, approving financial transactions, or signing legal documents digitally. Integration with public key infrastructure (PKI) further strengthens security by enabling encryption, secure communications, and verifiable digital signatures. By combining physical and knowledge-based verification, smart card authentication reduces the risk of unauthorized access, supports regulatory compliance, and enhances overall cybersecurity posture, making it a reliable and robust authentication method for high-security environments.

Question 152

Which type of attack floods a target system with traffic to exhaust resources and disrupt service availability?

A) Man-in-the-Middle
B) Denial-of-Service
C) SQL Injection
D) Session Hijacking

Answer: B

Explanation: 

A Denial-of-Service (DoS) attack is a type of cyberattack designed to disrupt the normal functioning of a system, application, or network by overwhelming it with an excessive amount of traffic or resource requests. The goal of such attacks is to make resources unavailable to legitimate users, leading to service interruptions, degraded performance, or complete system failure. Unlike Man-in-the-Middle attacks, which intercept or manipulate communications, SQL Injection attacks, which exploit vulnerabilities in databases, or Session Hijacking, which compromises user sessions, DoS attacks focus primarily on the availability of resources rather than on data confidentiality or integrity.

DoS attacks can take various forms, including SYN floods, where attackers exploit the TCP handshake process to consume server resources; ICMP floods, which overwhelm systems with ping requests; and HTTP request floods, which target web servers with large numbers of seemingly legitimate requests. These attacks can be further intensified through Distributed Denial-of-Service (DDoS) attacks, where multiple compromised systems, often part of a botnet, simultaneously generate traffic to amplify the disruption. DDoS attacks are particularly challenging to mitigate because the traffic originates from numerous sources, making it difficult to differentiate legitimate from malicious requests.

Organizations employ a combination of technical and procedural measures to defend against DoS and DDoS attacks. Technical defenses include intrusion prevention systems, firewalls with rate-limiting capabilities, redundant network architectures, cloud-based traffic scrubbing services, and anomaly detection systems that identify unusual patterns of activity. Procedural measures include preparing incident response plans that outline steps for detecting, mitigating, and recovering from attacks, as well as establishing communication channels with internet service providers and law enforcement agencies when necessary.
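One of the rate-limiting defenses mentioned above can be sketched as a per-client token bucket: each request spends a token, tokens refill over time, and clients that exceed the rate are rejected. The capacity and refill rate are illustrative assumptions.

```python
# A minimal sketch of rate limiting as a DoS countermeasure: a
# token-bucket filter applied per client IP.
import time

class TokenBucket:
    def __init__(self, capacity: float = 10, refill_per_sec: float = 5):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request dropped: client exceeded its rate limit

buckets: dict[str, TokenBucket] = {}

def handle_request(client_ip: str) -> str:
    bucket = buckets.setdefault(client_ip, TokenBucket())
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

results = [handle_request("198.51.100.9") for _ in range(12)]
print(results.count("429 Too Many Requests"))  # typically 2 of 12 rejected
```

Against a distributed attack, per-IP buckets alone are insufficient, which is why they are combined with upstream scrubbing and anomaly detection as described above.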

Question 153

Which type of malware is designed to remain hidden and collect sensitive information from a system?

A) Trojan
B) Spyware
C) Ransomware
D) Worm

Answer: B

Explanation: 

Spyware is a type of malicious software specifically designed to operate stealthily on a computer or network with the primary purpose of gathering sensitive information without the knowledge or consent of the user. It can collect a wide range of data, including login credentials, credit card numbers, browsing history, keystrokes, system configurations, and other personal or corporate information. Unlike Trojans, which typically masquerade as legitimate applications to trick users into executing them, ransomware, which encrypts files to demand payment, or worms, which self-replicate to spread across networks, spyware focuses on covert observation and information gathering rather than immediate disruption or overt damage.

Spyware can enter systems through multiple vectors, including phishing emails, malicious downloads, software bundles, infected websites, or exploit kits that take advantage of system vulnerabilities. Once installed, spyware can remain undetected for extended periods because it often avoids causing noticeable performance issues, which allows attackers to continuously gather data over time. This hidden nature makes detection challenging, requiring specialized tools and proactive security measures to identify and remove it effectively.

Organizations and individuals employ several strategies to combat spyware. Anti-spyware tools and endpoint detection and response (EDR) systems are critical for detecting and neutralizing spyware before it causes significant harm. Regular system scanning, timely application and operating system updates, and secure browsing practices reduce the risk of infection. Additionally, user education plays a vital role in preventing spyware installation by promoting awareness of suspicious links, downloads, and email attachments.

Question 154

Which type of firewall filters traffic based on source and destination IP addresses, ports, and protocols?

A) Packet-Filtering Firewall
B) Stateful Firewall
C) Application Layer Firewall
D) Next-Generation Firewall

Answer: A

Explanation: 

Packet-filtering firewalls are a foundational type of network security device that control the flow of traffic by examining packets at the network and transport layers. They make decisions based on key attributes such as source and destination IP addresses, port numbers, and the protocol type. This allows them to either permit or block packets according to a set of predefined rules established by network administrators. Packet-filtering firewalls operate efficiently because they inspect only the packet headers rather than the full content, making them relatively lightweight in terms of resource consumption and capable of handling high volumes of traffic with minimal latency.

Unlike stateful firewalls, which maintain context by tracking the state of active connections and can make decisions based on the state of a session, packet-filtering firewalls do not keep track of connection status. They also differ from application layer firewalls, which inspect the contents of application-level messages to detect malicious payloads or commands, and from next-generation firewalls, which combine features such as intrusion prevention, deep packet inspection, and application awareness. Because packet filters do not examine packet payloads, they are limited in detecting attacks that exploit application-level vulnerabilities, protocol abuses, or sophisticated spoofing techniques.

Despite these limitations, packet-filtering firewalls remain a critical first line of defense for network security. They are often deployed at network perimeters or between internal network segments to enforce access control policies, regulate traffic flow, and prevent unauthorized access. Effective deployment includes careful rule configuration, logging, and continuous monitoring to ensure rules remain aligned with security policies and evolving network conditions.

Question 155

Which principle ensures that critical data remains accurate, complete, and unaltered during storage and transmission?

A) Confidentiality
B) Integrity
C) Availability
D) Authentication

Answer: B

Explanation: 

Integrity in information security refers to the assurance that data remains accurate, consistent, and reliable throughout its lifecycle, whether it is stored, processed, or transmitted. Maintaining integrity ensures that information cannot be altered, corrupted, or tampered with, either intentionally by malicious actors or accidentally due to system errors or human mistakes. This concept is distinct from confidentiality, which focuses on limiting access to information, availability, which ensures data is accessible when needed, and authentication, which verifies the identity of users or systems. Integrity is foundational to trust, as it guarantees that the information relied upon for decision-making, reporting, or operational activities is correct and complete.

Several technical mechanisms support the enforcement of integrity. Hashing algorithms, such as SHA-256 or SHA-3, generate fixed-length digests that uniquely represent data, allowing verification that information has not been modified. Digital signatures combine cryptographic hashing with public-key infrastructure to provide non-repudiation while confirming that the data originates from a trusted source and has remained unchanged. Cryptographic checksums and error-detection codes help identify accidental data corruption during transmission or storage. Version control systems also play a critical role by tracking changes to files or datasets, enabling recovery to previous trusted states if unauthorized or erroneous modifications occur.

Organizations complement technical measures with procedural safeguards to maintain data integrity. Access controls, including role-based or attribute-based permissions, limit the ability of unauthorized users to modify sensitive information. Security policies, auditing, and continuous monitoring provide visibility into changes and ensure compliance with organizational standards and regulatory requirements. Logging and verification processes help detect anomalies and enable timely response to integrity violations.

Question 156

Which attack injects malicious SQL commands into input fields to manipulate a database?

A) Cross-Site Scripting
B) SQL Injection
C) Buffer Overflow
D) Directory Traversal

Answer: B

Explanation: 

SQL Injection is a type of attack that targets vulnerabilities in web applications by exploiting insufficient input validation to execute malicious SQL statements on a backend database. Unlike Cross-Site Scripting, which focuses on injecting scripts into user interfaces, Buffer Overflow, which manipulates memory to alter program execution, or Directory Traversal, which accesses restricted file paths, SQL Injection directly affects the database layer, potentially compromising the confidentiality, integrity, and availability of stored data. Attackers can leverage SQL Injection to bypass authentication mechanisms, retrieve sensitive information such as user credentials or financial data, modify records, delete critical entries, or even escalate privileges within the system. This makes SQL Injection one of the most severe and persistent threats to web applications and enterprise databases.

The root cause of SQL Injection usually stems from improper handling of user inputs. Applications that dynamically construct SQL queries using untrusted input without proper sanitization are particularly vulnerable. Attackers can craft input strings that alter the intended SQL logic, enabling them to gain unauthorized access or manipulate data. The consequences of a successful SQL Injection attack can range from data leakage and regulatory violations to complete compromise of the application or the underlying database infrastructure, resulting in operational disruption and reputational damage.

Mitigating SQL Injection involves multiple defensive strategies. Input validation is critical, ensuring that user-supplied data conforms to expected types, lengths, and formats. Using parameterized queries and prepared statements prevents the execution of injected SQL commands by separating data from code. Stored procedures and the principle of least privilege for database accounts further reduce potential attack surfaces. Web application firewalls can provide an additional layer of protection by detecting and blocking malicious SQL patterns. Regular security testing, including code reviews, static and dynamic application security testing (SAST and DAST), and penetration testing, is essential for identifying vulnerabilities before attackers can exploit them.
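The contrast between a vulnerable dynamic query and the parameterized form can be shown with the standard-library sqlite3 module. The table and data are illustrative.

```python
# A minimal sketch contrasting string-built SQL (vulnerable) with a
# parameterized query (safe), using an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# Vulnerable: input is concatenated into the SQL text, so the injected
# clause changes the query's logic and matches every row.
unsafe = conn.execute(
    f"SELECT * FROM users WHERE name = '{malicious}'").fetchall()
print(unsafe)  # [('alice', 's3cret')] -- authentication bypassed

# Safe: the placeholder keeps input as pure data, never as SQL code.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(safe)    # [] -- no user is literally named "' OR '1'='1"
```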

Question 157

Which type of access control allows users to make decisions about who can access their resources?

A) Discretionary Access Control (DAC)
B) Mandatory Access Control (MAC)
C) Role-Based Access Control (RBAC)
D) Attribute-Based Access Control (ABAC)

Answer: A

Explanation:

Discretionary Access Control (DAC) is an access management model in which the owner of a resource, such as a file, folder, or system object, has the authority to determine who is allowed to access it and what level of permissions they are granted. Unlike Mandatory Access Control, which enforces access decisions based on fixed security labels assigned to both users and resources, Role-Based Access Control (RBAC), which grants permissions according to pre-defined roles within an organization, or Attribute-Based Access Control (ABAC), which evaluates dynamic attributes like time, location, or device for access decisions, DAC places control directly in the hands of resource owners. This flexibility allows for quick and personalized access management, making DAC suitable for collaborative environments where users need the ability to share resources freely.

DAC is typically implemented using mechanisms such as Access Control Lists (ACLs), which explicitly define the users and groups permitted to perform specific actions on a resource. Each ACL entry specifies a subject, such as a user or group, and the associated permissions, such as read, write, or execute. While this model supports user convenience and adaptability, it also introduces security risks if resource owners do not apply consistent access management practices. Improper permission assignment, over-sharing, or failure to adhere to the principle of least privilege can result in unauthorized access, data leaks, or inadvertent exposure of sensitive information.

To address these risks, organizations implement auditing and monitoring to track access patterns and detect anomalies. Regular reviews of permissions and ACLs help ensure that access aligns with current organizational needs and compliance requirements. Additionally, access management policies, user education, and governance frameworks guide resource owners in making informed decisions regarding resource sharing. Combining DAC with other security controls, such as RBAC for critical systems or ABAC for dynamic access evaluation, can create a layered security approach that balances flexibility with control.

Ultimately, while DAC provides a user-friendly and adaptable model for access control, its effectiveness depends heavily on oversight, proper governance, and adherence to security best practices. By implementing monitoring, auditing, and training, organizations can leverage DAC’s flexibility without compromising the confidentiality, integrity, or availability of sensitive data.

Question 158

Which protocol is commonly used to securely transmit email over the Internet?

A) SMTP
B) IMAP
C) POP3
D) SMTPS

Answer: D

Explanation: 

SMTPS, or Simple Mail Transfer Protocol Secure, is an extension of the standard SMTP protocol that adds encryption using SSL/TLS to ensure the secure transmission of email messages between clients and servers, as well as between mail servers. While standard SMTP sends email in plaintext, which can be intercepted or altered by attackers, SMTPS encrypts the communication channel, providing confidentiality and integrity for messages in transit. This means that sensitive information, such as financial data, personal identifiers, or confidential business communications, is protected against eavesdropping, tampering, and man-in-the-middle attacks. Unlike IMAP, which focuses on retrieving emails and synchronizing folders across devices, or POP3, which downloads emails to a client, SMTPS specifically addresses the secure transport of emails, preventing unauthorized parties from reading or modifying content during transmission.

Implementing SMTPS requires proper management of SSL/TLS certificates on mail servers to establish trust and secure connections. Configuring mail servers to enforce TLS and disable insecure fallback protocols is essential to maintain strong encryption. Administrators must also ensure that email clients support SMTPS connections and are properly configured to use encrypted ports, commonly 465 or 587, depending on the server setup. In addition to securing the transport layer, organizations often implement complementary measures such as end-to-end encryption using standards like S/MIME or PGP to further protect the content of emails even if the transport layer is compromised.
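A minimal Python sketch of the client side follows, using the standard-library smtplib with implicit TLS on port 465. The server name and credentials are placeholders, not real endpoints; a server listening on port 587 would instead use smtplib.SMTP followed by starttls().

```python
# A minimal sketch of sending mail over an encrypted channel: implicit
# TLS (SMTPS) on port 465, with certificate validation enabled.
import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Encrypted in transit"
msg.set_content("This message travels over TLS, not plaintext SMTP.")

context = ssl.create_default_context()  # validates the server certificate

# Placeholder host and credentials for illustration only.
with smtplib.SMTP_SSL("mail.example.com", 465, context=context) as server:
    server.login("sender@example.com", "app-password")
    server.send_message(msg)
```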

Beyond encryption, SMTPS supports compliance with regulatory standards and data protection policies, which require secure handling of sensitive information. Monitoring and logging of email traffic over SMTPS help detect suspicious activity, misconfigurations, or potential security incidents. Organizations can also integrate phishing detection, spam filtering, and advanced threat protection to enhance overall email security.

Question 159

Which technique is used to make encrypted data unreadable to unauthorized users while allowing authorized users to decrypt it?

A) Hashing
B) Encryption
C) Salting
D) Tokenization

Answer: B

Explanation: 

Encryption converts plaintext into ciphertext using cryptographic algorithms and keys, rendering data unreadable to unauthorized users. Unlike hashing, which produces a one-way digest, salting, which strengthens hashes against precomputed attacks, or tokenization, which replaces sensitive data with surrogate tokens, encryption is reversible for authorized parties holding the correct decryption key. Encryption secures data at rest, in transit, and in use, preventing unauthorized access and ensuring confidentiality.

Modern encryption uses algorithms like AES, RSA, or ECC, combined with proper key management and access controls. Organizations enforce encryption policies across endpoints, networks, and storage systems to protect sensitive information, comply with regulations, and maintain trust. Proper implementation requires understanding key lifecycle management, algorithm strength, and integration into applications and systems. Encryption is a cornerstone of data security and a critical tool for mitigating breaches and maintaining privacy.
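The reversibility contrast can be shown directly: the Fernet recipe from the third-party cryptography package round-trips the plaintext for whoever holds the key, while a SHA-256 digest cannot be turned back into the original. The sample plaintext is illustrative.

```python
# A minimal sketch contrasting reversible encryption (Fernet) with
# one-way hashing: the key holder recovers the plaintext; the digest
# cannot be reversed.
import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # held only by authorized parties
f = Fernet(key)

token = f.encrypt(b"account data")  # unreadable without the key
print(f.decrypt(token))             # b'account data' -- reversible

digest = hashlib.sha256(b"account data").hexdigest()
print(digest)  # fixed-length digest; no key can recover the plaintext
```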

Question 160

Which principle involves dividing critical tasks among multiple individuals to prevent fraud or errors?

A) Principle of Least Privilege
B) Separation of Duties
C) Defense in Depth
D) Accountability

Answer: B

Explanation: 

Separation of Duties (SoD) ensures that no single individual has control over all aspects of a critical process, reducing the risk of fraud, errors, or malicious activity. Unlike the Principle of Least Privilege, which limits access, Defense in Depth, which layers security controls, or Accountability, which tracks actions, SoD enforces checks and balances. Implementation involves distributing responsibilities across personnel, systems, or processes. For example, in financial transactions, one person might initiate a payment while another approves it.

SoD reduces exposure to insider threats, enhances process integrity, and supports compliance with regulatory standards. Organizations implement automated workflow systems, role definitions, approval processes, and audit trails to enforce SoD effectively. SoD is critical for financial, operational, and IT governance, ensuring that systems and processes are secure, transparent, and resilient against misuse.
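The payment example above can be enforced in code with a single check: the approver must be a different person from the initiator. The names and workflow shape are illustrative assumptions.

```python
# A minimal sketch of separation of duties in a payment workflow: the
# same person may not both initiate and approve a payment.
def approve_payment(payment: dict, approver: str) -> dict:
    if approver == payment["initiator"]:
        raise PermissionError("SoD violation: initiator cannot self-approve")
    payment["approved_by"] = approver
    return payment

payment = {"id": 42, "amount": 9_500, "initiator": "alice"}
print(approve_payment(payment, "bob"))  # OK: two different people
# approve_payment(payment, "alice")     # would raise PermissionError
```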

 
