Question 1
Which of the following best describes the primary purpose of a third-party vendor risk assessment within an enterprise security programme?
( A ) To mandate the vendor’s adherence to the internal network segmentation architecture
( B ) To evaluate and quantify the risk that the vendor’s systems pose to the confidentiality, integrity, and availability of the organisation’s assets
( C ) To ensure the vendor uses the same brand of endpoint protection software as the internal organisation
( D ) To restrict the vendor’s access to all systems unless physically connected on‐premises
Answer: B
Explanation:
A third-party vendor risk assessment is a critical and foundational component of governance, risk management, and compliance (GRC) programs within any organization. Modern enterprises increasingly rely on third-party vendors and external service providers to deliver a variety of services, ranging from cloud hosting, software development, and data storage to logistics, consulting, and specialized operational support. While these vendors provide essential business functions, they also introduce additional risk into the organizational ecosystem. A vendor risk assessment is the structured process through which an organization identifies, evaluates, and quantifies the potential risks associated with outsourcing certain functions or relying on external partners. The goal is not only to identify potential threats but also to provide a framework for mitigating these risks and ensuring that vendors align with the organization’s security, regulatory, and operational requirements.
The primary purpose of a third-party vendor risk assessment is to evaluate the impact that a vendor’s systems, processes, and personnel may have on the confidentiality, integrity, and availability of the organization’s information, assets, and operational capabilities. Confidentiality ensures that sensitive data is accessible only to authorized personnel and that private information is protected from unauthorized disclosure. Integrity focuses on maintaining the accuracy, consistency, and trustworthiness of data throughout its lifecycle, ensuring that no unauthorized alterations occur. Availability ensures that the organization’s critical systems and data remain accessible and functional when needed. Vendors who manage, store, or process sensitive data can directly impact these three pillars of security, making a formal risk assessment indispensable.
It is important to note that third-party risk assessment is not merely about enforcing the same security configurations across the vendor’s environment. For instance, requiring a vendor to implement identical endpoint protection as the organization does might seem protective, but it is not the primary goal of risk assessment. Such enforcement addresses a specific control mechanism but does not provide a holistic evaluation of overall risk exposure. Endpoint protection is merely one layer within a broader security framework and does not fully account for operational, procedural, or human factors that could introduce vulnerabilities. Therefore, enforcing identical endpoint protection, while potentially beneficial, is insufficient as the main purpose of a risk assessment.
Similarly, vendor risk assessment is not confined solely to evaluating physical access controls or the vendor’s ability to prevent unauthorized personnel from entering facilities. While physical security is an essential consideration—especially for vendors handling sensitive hardware or storage—it represents only one dimension of potential risk. Vendor risk assessments examine a much broader spectrum of threats, including cyber security posture, regulatory compliance adherence, financial stability, operational resilience, disaster recovery preparedness, and reputational risk. Limiting the assessment to physical security would provide a narrow and incomplete understanding of the risks introduced by the vendor relationship.
A key aspect of vendor risk assessment is the evaluation of policies, procedures, and controls that the vendor has implemented to safeguard data and maintain service integrity. This can include technical controls such as network segmentation, encryption, access management, and logging, as well as administrative controls like employee training, incident response plans, and compliance audits. While mandating that vendors follow internal network segmentation rules can enhance security, this is an example of a specific control rather than the overarching objective of the assessment. The risk assessment itself is about understanding the level of exposure and determining whether existing controls are adequate or whether additional mitigation strategies are required.
The assessment process typically begins with identifying and categorizing vendors based on the level of access they have to sensitive systems and data, as well as the criticality of the services they provide. Vendors that handle personally identifiable information, financial data, or proprietary intellectual property are usually classified as high-risk due to the potential impact of a breach or operational failure. Once vendors are classified, the organization evaluates the specific threats associated with each vendor. This includes assessing the vendor’s security posture, compliance with applicable regulations and industry standards, history of security incidents, financial stability, and overall operational reliability.
To quantify and manage these risks, organizations often use frameworks and scoring methodologies. Risk scores may be assigned based on a combination of likelihood and impact, helping stakeholders prioritize which vendors require closer oversight, additional controls, or contractual safeguards. High-risk vendors might be required to undergo regular audits, implement additional security measures, participate in security awareness training, or submit detailed compliance reports. Low-risk vendors, on the other hand, may be subject to periodic reviews or standard contractual obligations. This structured approach ensures that resources are focused where they are most needed, allowing the organization to maintain operational efficiency while minimizing exposure to third-party risks.
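To make the scoring idea concrete, the following is a minimal Python sketch of a likelihood-times-impact model; the 1–5 scales, thresholds, and tier names are illustrative assumptions rather than a prescribed methodology.

```python
# Minimal sketch of a likelihood x impact vendor risk-scoring model.
# The scales, thresholds, and tier names are illustrative assumptions,
# not a prescribed CAS-005 methodology.

def vendor_risk_score(likelihood: int, impact: int) -> int:
    """Both inputs on a 1-5 ordinal scale; score ranges from 1 to 25."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def risk_tier(score: int) -> str:
    """Map a raw score to an oversight tier."""
    if score >= 15:
        return "high"      # e.g. regular audits, contractual security addenda
    if score >= 8:
        return "medium"    # e.g. annual questionnaire review
    return "low"           # e.g. standard contractual obligations

# Example: a cloud provider holding PII, where a breach is judged likely and severe.
score = vendor_risk_score(likelihood=4, impact=5)
print(score, risk_tier(score))  # 20 high
```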
Vendor risk assessments are also essential for regulatory compliance and legal accountability. Many industries, including finance, healthcare, and government services, have strict regulations governing data protection, privacy, and operational security. Regulatory bodies may require organizations to demonstrate due diligence in evaluating and managing vendor risk. Failing to perform a thorough risk assessment can result in penalties, reputational damage, and potential legal liability. By documenting risk assessment activities, organizations can provide evidence that they have taken reasonable measures to mitigate risks associated with third-party relationships, thus satisfying both internal governance policies and external regulatory requirements.
The process also plays a crucial role in incident response planning. Understanding the potential risks associated with each vendor allows an organization to develop contingency strategies in case of a security breach, operational failure, or compliance lapse originating from the vendor’s environment. These strategies may include predefined escalation procedures, alternative service arrangements, contractual penalties, and rapid data recovery protocols. In effect, vendor risk assessments enable proactive planning, reducing the likelihood that a vendor-related incident will escalate into a severe organizational crisis.
From a strategic perspective, vendor risk management is integral to enterprise risk management (ERM). ERM frameworks emphasize the identification, assessment, and mitigation of risks across the entire enterprise ecosystem, including internal operations, external partnerships, and supply chains. Vendor risk assessments provide crucial insights into potential vulnerabilities and dependencies outside the organization’s immediate control. By integrating these insights into the broader risk management strategy, organizations can make informed decisions about vendor selection, contract negotiations, and operational continuity planning.
Mitigation strategies derived from vendor risk assessments can take various forms. Contractual obligations are often used to formalize security requirements, compliance expectations, and reporting responsibilities. Monitoring and auditing programs provide ongoing oversight of the vendor’s adherence to policies and controls. Organizations may also require vendors to implement additional technical or administrative measures, such as multi-factor authentication, encryption of sensitive data, network segmentation, regular vulnerability scanning, or employee security training programs. These mitigation strategies ensure that risk is not merely identified but actively managed and reduced to acceptable levels.
Finally, a robust vendor risk assessment process fosters a culture of accountability, transparency, and continuous improvement. It encourages both the organization and its vendors to maintain high standards of operational security, compliance, and risk awareness. It also provides a mechanism for regular review and adaptation, allowing the organization to respond effectively to evolving threats, regulatory changes, and technological advancements.
Question 2
An organisation is designing a multi-cloud architecture and wishes to adopt a Zero Trust model. Which of the following design principles is most aligned with the Zero Trust concept?
( A ) Ensure all internal network traffic is implicitly trusted because it originates inside the corporate perimeter
( B ) Require micro-segmentation, continuous identity verification, and least-privilege access across all environments
( C ) Rely solely on perimeter firewalls to defend the network and ignore host-based controls
( D ) Permit administrative users full network access from any device without additional checks
Answer: B
Explanation:
The Zero Trust security model represents a fundamental shift in how organizations approach cybersecurity. Traditionally, many networks operated under the assumption that users and devices inside the corporate perimeter could be trusted, while threats primarily came from external sources. This model relied heavily on perimeter-based defenses such as firewalls and intrusion detection systems. However, with the rise of cloud computing, remote work, mobile devices, and sophisticated cyber threats, this perimeter-focused approach has become increasingly insufficient. Zero Trust rejects the notion of implicit trust based solely on network location or device ownership, emphasizing instead that all access requests must be rigorously verified regardless of origin. The principle at the heart of Zero Trust is that no user, device, or network segment should be automatically trusted, and every interaction must be authenticated and authorized continuously.
A core component of Zero Trust is micro-segmentation. This involves dividing the network into smaller, isolated zones, each with its own security controls and access policies. By creating these secure zones, organizations can limit lateral movement by attackers. If a threat successfully penetrates one segment, micro-segmentation prevents it from easily spreading to other parts of the network. This approach contrasts with traditional flat networks where a single breach could potentially compromise an entire infrastructure. Micro-segmentation requires careful planning of network architecture, including the implementation of robust firewall rules, access controls, and monitoring within each segment to enforce security boundaries effectively.
Another essential aspect of Zero Trust is continuous identity verification. This extends beyond the initial login or authentication step, requiring ongoing validation of both users and devices before granting access to sensitive resources. Continuous identity verification often involves multi-factor authentication, behavioral analytics, device health checks, and risk-based adaptive access controls. The system continuously evaluates whether the identity or device remains trustworthy based on context, activity patterns, and security posture. This dynamic verification ensures that compromised credentials, unauthorized devices, or suspicious behavior can be detected and mitigated in real time, significantly reducing the risk of unauthorized access.
Least-privilege access is also central to the Zero Trust model. Under this principle, users and devices are granted only the minimum access necessary to perform their required tasks. By limiting permissions, organizations reduce the potential impact of compromised accounts or insider threats. Implementing least-privilege access involves careful role-based access control, frequent review of access rights, and adjustments based on changes in job functions or organizational requirements. It ensures that even if an account is compromised, the potential damage an attacker can cause is restricted to only what is strictly necessary for that account’s operations.
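The deny-by-default nature of least privilege can be illustrated with a small sketch; the role names and permission strings below are hypothetical examples, not a specific product’s model.

```python
# Minimal role-based access control (RBAC) sketch illustrating least privilege.
# Role names and permission strings are hypothetical examples.

ROLE_PERMISSIONS = {
    "helpdesk":  {"ticket:read", "ticket:update"},
    "dba":       {"db:read", "db:backup"},
    "sec_admin": {"logs:read", "policy:update"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: only permissions explicitly granted to the role pass."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("dba", "db:backup")
assert not is_allowed("helpdesk", "db:backup")    # out of scope for the role
assert not is_allowed("unknown_role", "db:read")  # unknown roles get nothing
```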
Examining the options in the context of Zero Trust, option A, which assumes trust based on location or prior authentication, directly conflicts with Zero Trust principles. Implicitly trusting internal users or devices undermines the model’s foundational premise that all entities must be continuously verified. Option C focuses solely on perimeter security, which, while useful as one layer, does not address the comprehensive, continuous verification and access control required by Zero Trust. Option D violates both least-privilege principles and strong authentication practices by granting excessive access without sufficient verification. Therefore, the option aligned with the Zero Trust model emphasizes continuous verification, least-privilege access, and segmented, controlled environments.
Question 3
When performing a software bill of materials (SBOM) assessment in a supply-chain security context, which of the following outcomes is most appropriate?
( A ) Confirming that all software uses the exact version numbers of an internal standard
( B ) Identifying all libraries, components and dependencies used in an application to assess risk exposure
( C ) Removing all third-party software components from the application to reduce liability
( D ) Ensuring that only open-source licenses are used throughout the application
Answer: B
Explanation:
A software bill of materials, commonly abbreviated as SBOM, is a detailed inventory that enumerates all the components, libraries, frameworks, and dependencies that constitute a particular software application. In modern software development, applications are rarely built entirely from scratch. Instead, developers often rely on pre-existing open-source or third-party libraries, modules, and packages to accelerate development and provide standard functionality. While this practice improves efficiency and reduces development time, it introduces an element of risk because each external component could carry vulnerabilities, licensing obligations, or compatibility issues. An SBOM provides transparency by cataloging every element, including specific version numbers, in a structured format, enabling organizations to understand exactly what is present within their software stack.
The primary objective of an SBOM is to enhance software supply chain security and facilitate risk management. By having a complete map of all components, organizations can assess their exposure to known vulnerabilities. For instance, if a widely used open-source library has a recently disclosed security flaw, organizations with an SBOM can quickly determine whether that vulnerable version is present in their software and prioritize remediation efforts such as patching, upgrading, or implementing compensating controls. Without an SBOM, discovering affected components can be time-consuming, error-prone, and inefficient, potentially leaving systems exposed to exploitation.
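As an illustration of how an SBOM speeds up vulnerability triage, the following sketch matches a simplified component inventory against an advisory list; the component names, versions, and advisory identifier are placeholders, and real SBOM formats such as CycloneDX or SPDX carry far more detail.

```python
# Sketch: matching a simplified SBOM component list against known advisories.
# The dictionaries below loosely mimic SBOM data (name/version pairs); the
# component names, versions, and advisory entries are made-up examples.

sbom_components = [
    {"name": "log-formatter", "version": "2.14.1"},
    {"name": "http-client",   "version": "4.5.13"},
]

known_advisories = {
    ("log-formatter", "2.14.1"): "CVE-YYYY-NNNN (placeholder identifier)",
}

def affected_components(components, advisories):
    """Return components that appear in the advisory feed."""
    return [
        (c["name"], c["version"], advisories[(c["name"], c["version"])])
        for c in components
        if (c["name"], c["version"]) in advisories
    ]

for name, version, advisory in affected_components(sbom_components, known_advisories):
    print(f"Prioritise remediation: {name} {version} -> {advisory}")
```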
An SBOM also enables more effective patch management. Software components may receive updates at different times, and organizations must ensure that each dependency is current and secure. By maintaining a clear record of component versions, security teams can track which pieces require updates and verify that patching activities are complete. In addition, SBOMs are essential for regulatory compliance in sectors where software security is tightly controlled. Certain industries, including healthcare, finance, and critical infrastructure, may mandate that organizations have visibility into software dependencies to ensure that vulnerabilities are managed proactively. This makes SBOMs a vital tool not only for security but also for governance and compliance purposes.
It is important to clarify why other potential objectives often mentioned in exam contexts are not the primary purpose of an SBOM. Option A, which emphasizes strict version matching, can be helpful for compatibility and stability, but it is not the core goal of an SBOM. While knowing precise versions assists in mitigating risks associated with outdated libraries, the main focus is on identifying and understanding all components rather than enforcing rigid version control. Option C, which suggests removing all third-party components from the application, is unrealistic and counterproductive. The objective of an SBOM is to provide transparency and security insight, not to force developers to remove libraries that are integral to the application. Doing so could compromise software functionality and user experience. Option D, which focuses solely on licensing, is relevant for legal compliance and open-source management, but it does not address the central security and risk-related purpose of an SBOM. License tracking is a secondary benefit but not the primary reason organizations maintain these inventories.
Question 4
A security engineer proposes deploying an intrusion detection system (IDS) alongside an intrusion prevention system (IPS). Which combination of statements is correct regarding their roles and placement?
( A ) An IDS actively blocks malicious traffic; an IPS only alerts on suspicious activity
( B ) Place both IDS and IPS outside the firewall so they inspect unfiltered traffic only
( C ) An IDS monitors traffic and raises alerts; an IPS blocks or mitigates malicious traffic in real time
( D ) IDS and IPS serve identical functions and can be placed interchangeably without architectural consideration
Answer: C
Explanation:
An intrusion detection system, commonly abbreviated as IDS, is a critical component of network security architecture designed to monitor and analyze network traffic for signs of malicious activity or policy violations. Its primary function is to inspect packets, flows, and network behavior, comparing them against known signatures of attacks or anomalous patterns that may indicate potential security incidents. When such activity is detected, the IDS generates alerts to notify network administrators, security operations teams, or automated monitoring systems so that appropriate investigation or mitigation steps can be taken. IDS solutions can operate in various modes, including network-based, which monitors traffic across network segments, and host-based, which monitors activity on individual endpoints or servers. The essential characteristic of an IDS is its reactive nature: it observes, reports, and logs suspicious behavior but generally does not directly intervene to block or prevent the activity from occurring.
In contrast, an intrusion prevention system, or IPS, builds on the foundational monitoring capabilities of an IDS but adds the ability to actively respond to threats in real time. An IPS is typically deployed inline with network traffic, meaning that all traffic passes through the system, allowing it to take immediate action on malicious activity. This action can include dropping packets, terminating connections, resetting sessions, or modifying traffic to prevent attacks from reaching their intended targets. Essentially, an IPS not only detects potential threats but also prevents them from impacting network systems, thereby providing a more proactive layer of security. The IPS combines signature-based detection, anomaly detection, and sometimes behavior-based heuristics to identify threats while minimizing false positives, balancing the need for security with the uninterrupted flow of legitimate network traffic.
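The behavioural difference between the two can be sketched conceptually; the snippet below is purely illustrative (real IDS/IPS engines use signature and anomaly detection rather than a substring match), but it captures the alert-only versus inline-drop distinction.

```python
# Conceptual sketch only: contrasts the passive IDS role (alert) with the
# inline IPS role (drop). Real engines use signature/anomaly detection rather
# than a substring check; the "signature" here is a simplified placeholder.

from typing import Optional

MALICIOUS_SIGNATURE = b"' OR 1=1 --"   # toy SQL-injection pattern

def ids_inspect(packet: bytes) -> None:
    """IDS: observes a copy of the traffic and raises an alert; never blocks."""
    if MALICIOUS_SIGNATURE in packet:
        print("ALERT: suspicious payload observed (logged for analysts)")

def ips_inspect(packet: bytes) -> Optional[bytes]:
    """IPS: sits inline, so it can drop the packet before delivery."""
    if MALICIOUS_SIGNATURE in packet:
        print("BLOCKED: malicious payload dropped in real time")
        return None           # packet never reaches its destination
    return packet             # clean traffic is forwarded unchanged

payload = b"GET /login?user=admin' OR 1=1 --"
ids_inspect(payload)                 # alert only
forwarded = ips_inspect(payload)     # returns None, i.e. traffic blocked
```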
Question 5
During a forensic investigation, it is discovered that a file timestamp has been altered by malware. Which of the following integrity controls is most effective to detect such tampering proactively?
( A ) Periodic manual review of file explorer views
( B ) Using file hashing (e.g., SHA-256) combined with a baseline and frequent verification
( C ) Deploying an anti-spam filter on email traffic
( D ) Employing a VPN tunnel for remote access
Answer: B
Explanation:
File integrity monitoring is a fundamental component of a robust cybersecurity strategy, designed to ensure that critical files within a system remain in their intended, unaltered state. One common attack vector for malware involves tampering with files, including modifying timestamps, altering content, or replacing files with malicious versions. When such tampering occurs, it can compromise the integrity of the system, potentially allowing unauthorized access, data exfiltration, or execution of malicious code without immediate detection. Effective integrity controls are therefore essential to detect and respond to these alterations quickly, maintaining the reliability and trustworthiness of an organization’s digital assets.
A key approach to maintaining file integrity is establishing a baseline hash value for each critical file. Cryptographic hash functions, such as SHA-256, generate a unique fixed-length string that corresponds to the specific contents of a file. Any change in the file, even a single bit, will produce a drastically different hash value. This property allows security teams to detect unauthorized modifications precisely. By comparing the current hash of a file with its baseline hash, any discrepancy immediately signals that the file has been altered. This process is the foundation of file integrity monitoring (FIM), providing an automated, reliable, and scalable method to track changes and ensure that only authorized modifications occur.
Once the baseline hash values are established, regular verification is crucial. Automated monitoring systems can be configured to continuously or periodically check the hash values of files against their baseline. This proactive approach ensures that any tampering, whether caused by malware, human error, or configuration changes, is quickly identified. Automated alerts can be generated when discrepancies are detected, enabling security teams to investigate the cause, assess potential impacts, and remediate any malicious activity. By implementing this continuous monitoring, organizations create a layer of defense that protects against undetected file modifications, which could otherwise compromise system integrity, disrupt operations, or facilitate further attacks.
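A minimal baseline-and-verify workflow can be sketched with Python’s standard hashlib module; the file paths are hypothetical, and a production FIM tool would add secure baseline storage, scheduling, and alert routing.

```python
# Minimal file-integrity-monitoring sketch using SHA-256 baselines.
# File paths are hypothetical; production FIM tools add scheduling, secure
# baseline storage, and alerting, which are omitted here.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths, baseline_file="baseline.json"):
    baseline = {str(p): sha256_of(Path(p)) for p in paths}
    Path(baseline_file).write_text(json.dumps(baseline, indent=2))

def verify(baseline_file="baseline.json"):
    baseline = json.loads(Path(baseline_file).read_text())
    for path, expected in baseline.items():
        if sha256_of(Path(path)) != expected:
            print(f"TAMPERING SUSPECTED: {path} hash differs from baseline")

# build_baseline(["/etc/passwd", "/etc/ssh/sshd_config"])  # establish baseline
# verify()                                                 # run on a schedule
```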
Question 6
Which authentication mechanism provides the highest assurance of identity in a scenario involving remote privileged access for administrators?
( A ) Username and password only
( B ) Smart card plus PIN plus biometric verification
( C ) Single sign-on using reuse of the same credentials as general users
( D ) Token-based authentication without device or user verification
Answer: B
Explanation:
When managing remote privileged administrative access, ensuring that the authentication mechanism provides the highest possible level of identity assurance is critical. Privileged accounts, such as system administrators, network engineers, or database administrators, possess elevated rights that allow them to modify system configurations, access sensitive data, and perform actions that can significantly affect the security and operation of an organization’s IT infrastructure. Because of the potential impact of misuse or compromise, these accounts are high-value targets for attackers. If an attacker gains unauthorized access to a privileged account, the consequences can include data breaches, ransomware deployment, disruption of services, and even full compromise of enterprise systems. Therefore, implementing a robust authentication strategy is essential to mitigate risk and protect critical systems.
Multi-factor authentication (MFA) is a security control that requires users to present two or more independent credentials to verify their identity. These credentials are typically classified into three categories: something you know, something you have, and something you are. Something you know usually refers to a secret, such as a password or personal identification number (PIN). Something you have refers to a physical device that can generate or store authentication credentials, such as a smart card, hardware token, or mobile authenticator app. Something you are refers to inherent biometric characteristics, such as fingerprints, facial recognition, iris scans, or voice patterns. By combining multiple factors from different categories, MFA significantly increases the difficulty for attackers to impersonate a legitimate user. Compromising one factor alone is insufficient for gaining access, thus providing a layered defense against unauthorized access.
The specific combination of a smart card, PIN, and biometric verification provides one of the strongest forms of MFA. The smart card serves as a tangible device that stores cryptographic keys or certificates required for authentication, satisfying the “something you have” factor. The PIN acts as a secret knowledge factor, ensuring that even if the smart card is lost or stolen, possession alone is insufficient for authentication. Finally, biometric verification, such as a fingerprint scan or facial recognition, adds a third layer of security that validates the user’s physical identity, making impersonation much more difficult. This layered approach dramatically increases the assurance that the individual attempting to access a system is indeed the authorized account holder, especially when compared to weaker or single-factor methods.
In contrast, relying on a username and password alone is insufficient for securing privileged access. Passwords can be guessed, phished, intercepted, or cracked using brute-force attacks. Administrative accounts are often subject to targeted attacks, and using only a password provides minimal protection against sophisticated adversaries. Similarly, using single sign-on solutions without additional verification may increase convenience but can also expand the attack surface, as a compromise of one system could provide access to multiple accounts. Token-only solutions without an accompanying PIN or biometric verification are also inadequate because possession of the token alone may be sufficient for unauthorized access if the token is lost, stolen, or duplicated. These weaker approaches fail to meet the high assurance requirements necessary for privileged administrative access.
Question 7
What is the primary advantage of implementing micro-segmentation in a cloud-based infrastructure from a security perspective?
( A ) It reduces the number of firewalls needed in the network
( B ) It confines threat spread by restricting lateral movement within workload clusters
( C ) It eliminates the need for encryption of data in transit
( D ) It allows any workload to communicate freely without access controls
Answer: B
Explanation:
Micro-segmentation is an advanced network security technique that involves dividing a larger network, particularly in cloud, virtualized, or data center environments, into smaller, isolated segments, each with its own set of security controls and policies. Unlike traditional network segmentation, which typically separates networks into broad zones such as internal and external networks or DMZs, micro-segmentation focuses on creating very fine-grained divisions within the network. Each segment can be a single virtual machine, an application, or a group of workloads, depending on the organizational requirements and risk profile. This granular approach allows security teams to implement strict access controls and inspection policies tailored to the specific needs of each segment, thereby significantly enhancing the overall security posture of the environment.
The primary security benefit of micro-segmentation is its ability to contain threats and limit lateral movement. In the event that an attacker or malware gains access to one part of the network, micro-segmentation ensures that the threat cannot easily spread to other segments. Traditional flat networks or broad segmentation strategies often allow attackers to move laterally once they have compromised a single device, which can lead to widespread system compromise. Micro-segmentation, by contrast, creates logical boundaries between workloads, so that each segment operates as a controlled environment. Security controls such as firewalls, intrusion detection systems, or access control lists can be applied at the segment level to monitor and restrict traffic between zones. This containment approach is particularly effective in mitigating risks associated with advanced persistent threats, ransomware, and insider attacks, where unauthorized lateral movement is a key objective of the attacker.
Micro-segmentation also provides the ability to enforce more granular security policies based on the principle of least privilege. By restricting communication between segments only to the traffic necessary for specific applications or business functions, organizations reduce the attack surface and minimize the risk of accidental exposure of sensitive data. For instance, if a web server segment only needs to communicate with an application server segment on specific ports, all other traffic can be blocked, reducing the avenues available for an attacker. This selective access control not only strengthens security but also supports compliance requirements for regulatory frameworks such as PCI DSS, HIPAA, and ISO 27001, which emphasize protecting sensitive data and enforcing strict access controls.
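A conceptual sketch of such an allow-list is shown below; in practice these rules are enforced by SDN controllers, cloud security groups, or host firewalls, and the segment names and ports are illustrative assumptions.

```python
# Conceptual allow-list sketch for segment-to-segment traffic. In practice
# these rules live in SDN controllers, cloud security groups, or host
# firewalls; segment names and ports here are illustrative assumptions.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier"): {8443},   # web servers may call the app API
    ("app-tier", "db-tier"):  {5432},   # app servers may reach the database
}

def flow_permitted(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default deny: traffic passes only if an explicit rule allows it."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

print(flow_permitted("web-tier", "app-tier", 8443))  # True  - explicitly allowed
print(flow_permitted("web-tier", "db-tier", 5432))   # False - no lateral path
```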
It is important to understand that micro-segmentation does not replace other security mechanisms, but rather complements them. Encryption, endpoint protection, intrusion detection, and other network security controls remain critical. Micro-segmentation focuses on reducing the impact of a breach by limiting the spread of threats, not on replacing encryption or other preventive measures. Implementing micro-segmentation may also require additional network infrastructure, software-defined networking capabilities, or firewall rules, as granular controls need to be enforced at multiple points within the network. While this can increase complexity, the security benefits outweigh the operational overhead, especially in high-risk environments or where sensitive data is stored and processed.
From an architectural perspective, micro-segmentation represents a shift towards zero trust principles. In a zero trust model, no device, user, or segment is implicitly trusted, even if it is within the corporate perimeter. Micro-segmentation operationalizes this principle by ensuring that trust is not assumed between network segments. Each interaction between segments is explicitly controlled and monitored, reinforcing security boundaries and enabling organizations to quickly detect and respond to suspicious activity. Security teams can implement logging and monitoring within each segment to track traffic patterns and identify anomalies, further enhancing visibility and situational awareness.
Question 8
A company is drafting its incident response plan. Which of the following steps should occur first when a security incident is detected?
( A ) Eradication of malicious artifacts across all systems
( B ) Containment of the incident to prevent further spread
( C ) Identification and classification of the incident type and scope
( D ) Restoration of systems from backups
Answer: C
Explanation:
When a security incident is detected, the first step is to identify and classify the incident—determine what type of incident it is (malware, intrusion, data breach), its scope (which systems are affected, how many, and what data is involved), and its severity. This classification informs subsequent decisions about containment, eradication, and recovery. Option B (containment) comes next, once the incident is identified. Option A (eradication) follows containment. Option D (restoration) occurs later, in the recovery phase. Therefore C is correct. Incident response procedures and operations form a critical part of the operations and incident response domain.
Question 9
Which of the following describes the concept of “least-privilege” access and its significance within an enterprise security programme?
( A ) Granting users full access so they don’t need to request permissions later
( B ) Restricting users to the minimum permissions necessary to perform their tasks, thereby reducing risk of misuse or compromise
( C ) Allowing temporary full access to all users to simplify configuration
( D ) Keeping one generic administrative account and using it for all users
Answer: B
Explanation:
Least-privilege access means that a user, system or process is granted only those permissions essential to perform its legitimate tasks—and nothing more. This reduces the risk of misuse, shrinks the attack surface, and limits the potential damage if the account is compromised. Option A is the opposite of least privilege. Option C increases risk. Option D increases risk via shared, over-privileged accounts. Therefore B is correct. This principle is key in access control, security architecture, and operations.
Question 10
In a risk assessment process, an organisation calculates expected annual loss as follows: asset value = $500,000, threat likelihood = 4 % per year, vulnerability factor = 0.5. What is the annualised loss expectancy (ALE) for that asset?
( A ) $10,000
( B ) $20,000
( C ) $100,000
( D ) $40,000
Answer: A
Explanation:
The annualised loss expectancy (ALE) is calculated as the single loss expectancy (SLE) multiplied by the annualised rate of occurrence (ARO), where the SLE equals the asset value multiplied by the exposure (vulnerability) factor. Here, SLE = $500,000 × 0.5 = $250,000 and ARO = 4% = 0.04, giving ALE = $250,000 × 0.04 = $10,000. Therefore A is correct. This kind of quantitative risk calculation is part of governance, risk and compliance.
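The calculation can be reproduced directly:

```python
# Worked ALE calculation from the figures in the question.
asset_value = 500_000              # AV
exposure_factor = 0.5              # vulnerability/exposure factor (EF)
annual_rate_of_occurrence = 0.04   # 4% likelihood per year (ARO)

sle = asset_value * exposure_factor        # single loss expectancy = 250,000
ale = sle * annual_rate_of_occurrence      # annualised loss expectancy
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")  # SLE = $250,000, ALE = $10,000
```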
Question 11
Which technique is most appropriate to ensure the confidentiality of data at rest in an enterprise mobile device environment?
( A ) Disabling the screen saver
( B ) Encrypting the storage volume and using strong keys
( C ) Using only weak passwords so users don’t forget them
( D ) Leaving devices unlocked to facilitate rapid access
Answer: B
Explanation:
To protect data at rest—meaning data stored on a device or volume—that data should be rendered unreadable to unauthorized individuals. Using full-disk or volume encryption combined with strong cryptographic keys is the best approach to ensure confidentiality. Option A has no impact on the confidentiality of stored data. Option C is counter-productive (weak passwords are a risk). Option D exposes data to risk. Therefore B is correct. Encryption of data at rest is a key control in both architecture and operations domains.
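For illustration only, the sketch below shows application-level encryption of data before it is written to storage, assuming the third-party cryptography package is available; on mobile devices, full-volume or file-based encryption is normally provided by the platform itself rather than by application code.

```python
# Application-level illustration of encrypting data at rest, assuming the
# third-party `cryptography` package is installed. Full-volume encryption on
# mobile devices is normally handled by the platform (e.g. file-based
# encryption on Android, Data Protection on iOS), not application code.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, protect this key in a keystore/HSM
cipher = Fernet(key)

plaintext = b"customer record: sensitive data at rest"
ciphertext = cipher.encrypt(plaintext)   # store only the ciphertext on disk

# Later, an authorised process holding the key can recover the data:
assert cipher.decrypt(ciphertext) == plaintext
```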
Question 12
What is the primary distinction between SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing) during software development?
( A ) SAST tests running applications only; DAST analyses source code only
( B ) SAST evaluates the application in operating state; DAST reviews the architecture design only
( C ) SAST reviews source code, binaries or internal logic before runtime; DAST tests the running application during execution
( D ) SAST is done after deployment; DAST is done before writing any code
Answer: C
Explanation:
Static Application Security Testing (SAST) analyses source code, compiled binaries or internal logic without executing the program—it’s done early in the software development life cycle (SDLC). Dynamic Application Security Testing (DAST) operates on the running application during execution, simulating external attacks, providing insight into runtime behaviour, interactions, inputs and outputs. Option C correctly distinguishes them. Option A gets it backward. Option B is imprecise. Option D mis-positions the timing. Recognising SAST vs DAST is part of secure software lifecycle and engineering domain content.
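An example of the kind of source-level flaw a SAST scanner typically flags (user input concatenated into a SQL statement) is sketched below, alongside the parameterised alternative; a DAST tool would instead probe the running endpoint with crafted input. The table and function names are illustrative.

```python
# The kind of source-level issue a SAST scanner typically flags: user input
# concatenated into a SQL statement. The table and function names are
# illustrative; sqlite3 is used only because it ships with Python.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flagged by static analysis: query built by string concatenation.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query: the driver handles escaping, removing the injection path.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()
```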
Question 13
Which of the following best illustrates “defence-in-depth” when architecting a secure environment?
( A ) Relying solely on perimeter firewall to protect the network
( B ) Using multiple layers of controls such as firewalls, endpoint protection, network segmentation, access controls, monitoring and encryption
( C ) Disabling logging to avoid overwhelming the security team
( D ) Giving users direct unrestricted access to the internet
Answer: B
Explanation:
Defence-in-depth is a strategy that uses multiple layers of security controls across different layers—network, host, application, data—to create redundancy and mitigate single points of failure. It aims to provide overlapping protection so that if one layer fails, others remain. Option B describes this layering. Option A is insufficient (single control). Option C is counter-productive (logs are essential). Option D opens major risk. Therefore B is correct. This concept is central to security architecture and design.
Question 14
An organisation uses virtualization and “gold images” for endpoint provisioning. Which practice helps ensure these images remain secure over time?
( A ) Never updating the gold image to avoid disruption
( B ) Periodic patching of the gold image, creating versioned images and deploying them to new endpoints
( C ) Allowing end-users to install any software on endpoints created from the gold image
( D ) Removing endpoint logging because it slows down performance
Answer: B
Explanation:
When using gold images (standardised virtual machine or endpoint images), it is imperative to maintain them securely: keep them current with patches, create versioned images so you can roll forward, test the new image before deployment, and ensure deployment replaces outdated endpoints. Option B encapsulates these best practices. Option A (never updating) leaves the environment vulnerable. Option C (end-users install any software) undermines standardisation and control. Option D (remove logging) reduces visibility and weakens security. Therefore B is correct. This falls under secure build and lifecycle management in the engineering domain.
Question 15
Which of the following is least likely to be considered a function of a Security Information and Event Management (SIEM) system in operational security?
( A ) Collecting logs from disparate systems and normalising them
( B ) Correlating events to identify potentially malicious patterns
( C ) Providing dashboards and reports for compliance and threat hunting
( D ) Automatically deploying firmware updates to all endpoints
Answer: D
Explanation:
A SIEM system is designed to aggregate security logs from various sources, normalise data, correlate events to spot suspicious patterns, provide dashboards and reporting, support threat hunting and help compliance. Option D (automatically deploying firmware updates to all endpoints) is not a typical function of a SIEM – that activity is handled by patch management or endpoint management systems. Therefore D is the correct answer for “least likely” to be part of SIEM functionality. This aligns with security operations domain.
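A toy version of the correlation step is sketched below: after normalisation, a rule counts failed logins from the same source within a time window. The event format, threshold, and window are illustrative assumptions, not any particular SIEM’s rule syntax.

```python
# Toy correlation rule of the kind a SIEM applies after normalising logs:
# flag a source that generates several failed logins in a short window.
# The event format, threshold, and window are illustrative assumptions.

from collections import defaultdict
from datetime import datetime, timedelta

events = [
    {"time": datetime(2024, 1, 1, 9, 0, s), "src": "10.0.0.5", "action": "login_failed"}
    for s in range(0, 50, 10)
]

WINDOW = timedelta(minutes=5)
THRESHOLD = 5

failures = defaultdict(list)
for e in sorted(events, key=lambda e: e["time"]):
    if e["action"] != "login_failed":
        continue
    bucket = failures[e["src"]]
    bucket.append(e["time"])
    # keep only failures inside the sliding window
    bucket[:] = [t for t in bucket if e["time"] - t <= WINDOW]
    if len(bucket) >= THRESHOLD:
        print(f"ALERT: possible brute force from {e['src']}")
```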
Question 16
Which cryptographic method ensures non-repudiation of a transaction?
( A ) Symmetric key encryption without authentication
( B ) Hashing data only
( C ) Digital signature using asymmetric keys and certificate infrastructure
( D ) Compressing the data
Answer: C
Explanation:
Non-repudiation means the ability to prove that a specific entity sent, signed or authorized something, and that they cannot deny that fact later. Digital signatures using asymmetric keys and a certificate infrastructure (Public Key Infrastructure, PKI) ensure that only the private key holder could have created the signature, and that any changes in the data invalidate it. This provides authentication, integrity and non-repudiation. Option A lacks authentication and is symmetric. Option B (hashing) provides integrity but not identity binding. Option D (compression) is irrelevant. Therefore C is correct. Cryptography and PKI are key topics in security engineering.
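A brief sketch of signing and verification using the third-party cryptography package is shown below; in a full PKI deployment the public key would additionally be bound to the signer’s identity by a CA-issued certificate, which is omitted here.

```python
# Digital-signature sketch using the third-party `cryptography` package.
# A full PKI would bind the public key to an identity with a CA-issued
# certificate; that step is omitted here for brevity.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Transfer 500 units to account 42"

# Only the private-key holder can produce this signature (non-repudiation).
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Any party with the public key can verify it; tampering raises InvalidSignature.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```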
Question 17
In the context of change management, what is the main security concern if an organisation fails to document and approve configuration changes to systems?
( A ) Systems run latest features faster
( B ) Uncontrolled changes may introduce vulnerabilities, misconfigurations or inconsistency across systems
( C ) It allows auditors to skip their reviews
( D ) End-users gain flexibility
Answer: B
Explanation:
Lack of documented and approved configuration changes presents a significant security risk: uncontrolled changes may introduce misconfigurations, inadvertently disable security controls, create inconsistencies (leading to gaps in patching or monitoring), and increase the chance of vulnerabilities. Option A is not a security advantage. Option C (skip auditors) is irrelevant. Option D (end-user flexibility) may reduce control and increase risk. Therefore B is correct. This is part of governance, risk and compliance and operations controls.
Question 18
What is the advantage of deploying a honeypot as part of an intrusion detection strategy?
( A ) It blocks all legitimate user traffic
( B ) It distracts attackers by appearing as a legitimate target, thereby enabling monitoring of attacker behaviour without risking real assets
( C ) It encrypts user traffic for confidentiality
( D ) It ensures no malware ever reaches the production systems
Answer: B
Explanation:
A honeypot is a decoy system that is intentionally exposed to attract attackers. By doing so, it distracts them and allows defenders to observe attacker techniques, gather intelligence, and detect threats early without risking critical production systems. Option A is incorrect (honeypots don’t block legitimate traffic en masse). Option C (encrypting user traffic) is unrelated. Option D (ensuring no malware ever reaches production) is unrealistic—honeypots help detection but cannot guarantee that. Therefore B is correct. Honeypots and deception technologies are part of advanced operations and threat-hunting strategy.
Question 19
An enterprise implements multi-factor authentication (MFA) using a mobile push notification and fingerprint scan. Which principle of identity and access management (IAM) does this strengthen?
( A ) Single sign-on (SSO)
( B ) Federation without verification
( C ) Assurance of identity and multi-factor verification
( D ) Shared generic accounts
Answer: C
Explanation:
The use of multi-factor authentication (MFA) with both something the user has (mobile device), and something the user is (fingerprint) strengthens the assurance of identity—it confirms the identity of the user with multiple independent factors. This makes impersonation much harder. Option A (SSO) refers to single sign-on—related but not the core strength here. Option B (federation without verification) is weak and opposite of the strengthening effect. Option D (shared generic accounts) is insecure. Therefore C is correct. IAM principles like assurance, strong authentication, federation and least-privilege are part of the security engineering domain.
Question 20
Which of the following is an example of preventive security control versus detective or corrective?
( A ) Log monitoring to identify anomalies
( B ) Deploying a web application firewall (WAF) to block injection attacks
( C ) Conducting post-incident reviews after a breach
( D ) Performing data backup and restoration after ransomware
Answer: B
Explanation:
Preventive controls are designed to stop an incident or threat before it occurs. Deploying a web application firewall (WAF) that blocks injection attacks is a preventive measure—it aims to stop malicious traffic from entering the application. Option A (log monitoring) is detective (identifying occurrences). Option C (post-incident review) is corrective/lessons-learned. Option D (backup/restoration) is corrective or recovery. Thus B is correct. Knowing the difference between preventive, detective and corrective controls is important in governance and architecture.