QUESTION 161
Which network security technique divides a network into smaller, isolated segments to limit lateral movement and reduce the impact of security breaches?
A) Network Address Translation
B) Subnet Masking
C) Network Segmentation
D) Port Forwarding
Answer:
C
Explanation:
Network segmentation is the correct answer because it refers to the practice of dividing a larger network into smaller, controlled sections. SSCP candidates must understand network segmentation because it is a critical defensive strategy that restricts lateral movement, reduces the spread of malware, minimizes insider threats, and enhances overall network visibility and control.
Segmentation works by separating systems based on function, sensitivity, or user roles. For example, sensitive systems such as financial servers, HR databases, and domain controllers can be placed in isolated subnets with limited access. Likewise, guest networks can be separated from internal corporate networks to prevent visitors or unauthorized users from accessing sensitive resources. This segmentation reduces the attack surface and limits the damage potential if an attacker compromises a single device.
Network segmentation also helps enforce the principle of least privilege at a network level. Instead of allowing all devices to communicate freely, segmentation ensures that only authorized systems can interact. Technologies such as firewalls, VLANs, ACLs, microsegmentation, and zero trust network architectures are commonly used to implement segmentation.
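As a minimal illustration, a default-deny segmentation policy can be thought of as an explicit allowlist of zone-to-zone flows. The sketch below uses invented zone names and denies any communication not expressly permitted:

```python
# Minimal sketch: a zone-to-zone segmentation policy expressed as an
# explicit allowlist. Zone names and rules are illustrative only.
ALLOWED_FLOWS = {
    ("user_workstations", "web_proxy"),
    ("web_proxy", "internet"),
    ("app_servers", "db_servers"),
    ("admins", "domain_controllers"),
}

def is_flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: traffic is allowed only if the zone pair is listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# A guest device attempting to reach an HR database is denied because
# no ("guest_wifi", "hr_database") rule exists.
print(is_flow_permitted("guest_wifi", "hr_database"))  # False
print(is_flow_permitted("app_servers", "db_servers"))  # True
```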
Comparing segmentation with the incorrect responses highlights why option C is correct. Network Address Translation hides internal IP addresses but does not isolate traffic effectively enough to prevent lateral movement. Subnet masking divides IP ranges but does not enforce security controls between segments—communication is still unrestricted unless firewalls or ACLs are applied. Port forwarding directs traffic from an external source to a particular internal service, which has nothing to do with isolating internal sections of a network. Only segmentation restricts communication among network groups and enforces internal boundaries.
Segmentation improves incident containment. If malware infects one user’s workstation, proper segmentation prevents it from spreading to servers or other critical systems. This is especially important for ransomware, which often spreads laterally within flat networks. Segmentation also reduces the blast radius of insider threats: a compromised user account with access to only one network zone cannot compromise every system in the organization.
Many regulatory frameworks require or strongly encourage segmentation. For example, separating the cardholder data environment from the rest of the network dramatically reduces PCI DSS scope, and HIPAA security guidance recommends isolating systems that handle electronic health information from general office networks. Segmentation reduces risk by ensuring that breaches in less sensitive areas do not compromise highly sensitive ones.
Segmentation also enhances monitoring capabilities. Network traffic restricted to dedicated paths becomes easier to observe and analyze. Security teams can identify abnormal communication patterns that might indicate intrusions or misconfigurations. Additionally, segmentation supports performance optimization by preventing broadcast storms and reducing congestion.
To implement effective segmentation, organizations must classify assets, define policies, configure network devices appropriately, and continuously monitor traffic flows. Microsegmentation takes this approach further by using software-defined networking to isolate workloads at a granular level in cloud and virtualized environments.
Because segmentation divides networks into controlled, isolated segments, preventing unauthorized movement and reducing security risks, answer C is correct.
QUESTION 162
Which type of malware disguises itself as legitimate software to trick users into installing it, but secretly performs malicious activities once executed?
A) Worm
B) Trojan
C) Rootkit
D) Ransomware
Answer:
B
Explanation:
A Trojan is the correct answer because it masquerades as legitimate or desirable software in order to deceive users into installing it. SSCP candidates must understand Trojans because they represent one of the most common attack vectors, exploiting user trust instead of technical vulnerabilities. Once installed, a Trojan can perform a wide range of malicious actions while appearing benign.
A Trojan does not self-replicate like a worm. Instead, it relies on social engineering or deception to convince the victim to execute it voluntarily. Attackers often disguise Trojans as software updates, media files, utilities, security tools, or cracked applications. The user believes they are installing something useful, but the hidden malicious component activates in the background.
Comparing Trojans to other malware types clarifies why option B is correct. Worms propagate automatically across networks without user interaction. Rootkits modify system components to hide malicious activity but are not necessarily disguised software at the point of installation. Ransomware encrypts data and demands payment but does not specifically focus on deception through legitimate appearance. Only Trojans rely heavily on disguise to gain entry.
Once installed, a Trojan may open a backdoor, steal passwords, capture keystrokes, download additional malware, participate in botnets, exfiltrate sensitive data, or sabotage systems. Attackers often use Trojans as the initial stage of a larger intrusion. After gaining access, they install secondary payloads such as ransomware or remote access tools.
Trojans are commonly delivered through phishing emails, malicious websites, compromised software repositories, and fake downloads. Attackers may use misleading advertisements or search engine poisoning to lure victims into downloading Trojanized software. Users who lack cybersecurity awareness are particularly vulnerable to such tactics.
Organizations can defend against Trojans through a combination of user training, endpoint protection tools, restricted installation privileges, application allowlisting, and real-time threat detection. Email filtering, web filtering, and sandboxing also help block Trojan-laden attachments or executables. Additionally, monitoring network traffic for unusual outbound connections may reveal Trojan activity attempting to communicate with command-and-control servers.
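As an illustration of one of these defenses, the following sketch shows hash-based application allowlisting in rough outline; the digest and path are placeholders, not real values:

```python
# Minimal sketch of hash-based application allowlisting: only executables
# whose SHA-256 digest appears in the approved set may run.
import hashlib

APPROVED_HASHES = {
    "d2f0...placeholder...",  # e.g., digest of an approved, vetted binary
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path: str) -> bool:
    """Deny by default: a Trojan renamed to look legitimate still fails
    the check because its contents differ from the approved binary."""
    return sha256_of(path) in APPROVED_HASHES
```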
SSCP candidates must recognize that Trojans exploit human behavior rather than pure system vulnerabilities. Therefore, organizational security requires not only technical defenses but also effective user awareness programs to identify deceptive downloads and suspicious links.
Because a Trojan specifically disguises itself as legitimate software to trick users into installing malicious code, answer B is correct.
QUESTION 163
Which metric represents the average time it takes to repair a failed component or system and restore it to operational status?
A) MTTF
B) MTTR
C) MTBF
D) RTO
Answer:
B
Explanation:
MTTR, or Mean Time to Repair, is the correct answer because it measures the average time required to diagnose, fix, and restore a failed system or component. SSCP candidates must understand MTTR because it is a critical performance indicator that reflects the efficiency of maintenance processes, the reliability of systems, and the readiness of support teams.
MTTR applies to hardware failures, software malfunctions, network outages, and other operational disruptions. It includes the time spent detecting the issue, isolating the cause, repairing or replacing the faulty part, testing the solution, and returning the system to service. A lower MTTR indicates faster recovery and greater system resilience.
Comparing MTTR to other metrics clarifies why option B is correct. MTTF, or Mean Time to Failure, measures how long a non-repairable system operates before failing. MTBF, or Mean Time Between Failures, applies to repairable systems but measures the time between one failure and the next, not the repair time itself. RTO, or Recovery Time Objective, describes how quickly a system must be restored after a disaster but is a planning target rather than a measured operational metric.
MTTR helps organizations evaluate their ability to respond to incidents. For example, if a server fails and takes four hours to restore, that contributes to the overall MTTR value. Organizations track trends over time to improve operational efficiency. If MTTR is too high, root cause analysis, improved documentation, spare parts availability, better training, or stronger monitoring may be required.
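A short worked example makes the arithmetic concrete (the incident figures below are invented):

```python
# Worked example with hypothetical incident data: MTTR is total repair
# time divided by the number of repairs.
repair_hours = [4.0, 1.5, 2.5]        # time to restore each failure
mttr = sum(repair_hours) / len(repair_hours)
print(f"MTTR = {mttr:.2f} hours")      # MTTR = 2.67 hours

# For contrast, MTBF averages the operating time between failures.
hours_between_failures = [720, 1080, 900]
mtbf = sum(hours_between_failures) / len(hours_between_failures)
print(f"MTBF = {mtbf:.0f} hours")      # MTBF = 900 hours
```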
MTTR is important for maintaining high availability environments. Service-level agreements often include MTTR expectations to ensure reliable service delivery. A low MTTR reduces downtime, improves user satisfaction, and minimizes business impact during failures. In mission-critical industries such as healthcare, aviation, finance, and energy, MTTR has a direct effect on safety, customer experience, and regulatory compliance.
Automated monitoring tools can detect system failures more quickly, reducing time spent diagnosing issues. Redundancy and fault-tolerant designs can also minimize the impact of failures, but MTTR still applies to repairing underlying components even when failover systems take over temporarily.
Because MTTR specifically measures the time required to repair failures and restore operations, answer B is correct.
QUESTION 164
Which form of access control bases permissions on an individual’s job responsibilities, grouping users into predefined roles to simplify administration?
A) DAC
B) RBAC
C) MAC
D) ABAC
Answer:
B
Explanation:
RBAC, or Role-Based Access Control, is the correct answer because it assigns permissions to roles rather than individuals. SSCP candidates must understand RBAC because it simplifies administration, reduces errors, supports least privilege, and aligns access rights with organizational structure.
In RBAC, administrators define roles such as HR, finance, developer, network administrator, or customer support. Each role has a set of permissions that reflect the tasks required for that job. Users are then assigned to roles rather than given individual permissions. This ensures consistency and makes it easy to onboard or modify users without manually adjusting their privileges.
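A minimal sketch, using invented role and permission names, shows how this indirection works in practice:

```python
# Minimal RBAC sketch: permissions attach to roles, users attach to roles.
ROLE_PERMISSIONS = {
    "hr":       {"read_employee_records", "update_employee_records"},
    "finance":  {"read_ledger", "post_journal_entry"},
    "helpdesk": {"reset_password", "read_tickets"},
}

USER_ROLES = {
    "alice": {"hr"},
    "bob":   {"finance", "helpdesk"},
}

def is_authorized(user: str, permission: str) -> bool:
    """A user holds a permission only through an assigned role."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

# Moving bob to HR is a single role change, not a manual re-grant
# of individual permissions:
USER_ROLES["bob"] = {"hr"}
```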
Comparing RBAC to other access control models clarifies its advantages. DAC, or Discretionary Access Control, allows data owners to decide who can access their resources, which can lead to inconsistent permission assignments. MAC, or Mandatory Access Control, uses labels and classifications such as confidential or secret and enforces rigid access policies, typically in military or government settings. ABAC, or Attribute-Based Access Control, uses attributes such as device type, location, or time of day to make dynamic decisions.
RBAC is widely used in enterprise environments, particularly with systems like Active Directory. It supports policy enforcement by grouping permissions in a structured, predictable way. RBAC also simplifies audits because roles can be reviewed as a whole instead of evaluating each user individually.
Organizations benefit from RBAC by improving security hygiene and reducing privilege creep. If a user changes departments, administrators simply change the user’s role assignment. This prevents users from accumulating unnecessary permissions over time. RBAC also facilitates regulatory compliance because auditors can easily verify that access permissions align with job functions.
Because RBAC groups access permissions based on job responsibilities and assigns users to those predefined roles, answer B is correct.
QUESTION 165
Which wireless security mechanism prevents unauthorized devices from connecting by requiring authentication through a centralized server such as RADIUS?
A) WPA2-Personal
B) WPA3-Personal
C) WPA2-Enterprise
D) WPS
Answer:
C
Explanation:
WPA2-Enterprise is the correct answer because it requires authentication through a centralized server, typically RADIUS, using protocols such as 802.1X. SSCP candidates must understand WPA2-Enterprise because it is the standard authentication method for secure enterprise Wi-Fi deployments, offering stronger identity verification than personal-mode networks.
Unlike WPA2-Personal, which relies on a shared passphrase used by all devices, WPA2-Enterprise assigns unique credentials to each user or device. These may include usernames and passwords, certificates, or smart card-based authentication. The authentication request is forwarded to a RADIUS server, which validates the credentials and grants or denies access accordingly.
WPA3-Personal strengthens passphrase-based authentication through Simultaneous Authentication of Equals (SAE) but does not use centralized authentication. WPS is a simplified configuration method with known security weaknesses, notably PIN brute-forcing, and does not provide enterprise-grade authentication.
WPA2-Enterprise enhances security by eliminating shared passwords, which can be easily leaked or reused. Each user has a unique identity, making it easier to revoke access, monitor activity, and enforce individual accountability. It also enables certificate-based authentication, which is far more secure than passwords and resistant to phishing and brute-force attacks.
Because WPA2-Enterprise requires centralized authentication through RADIUS and provides strong identity controls, answer C is correct.
QUESTION 166
Which attack involves intercepting communication between two parties and secretly altering the messages without their knowledge?
A) Replay Attack
B) Spoofing Attack
C) Man-in-the-Middle Attack
D) Dictionary Attack
Answer:
C
Explanation:
A man-in-the-middle (MITM) attack is the correct answer because it involves intercepting communication between two parties and manipulating the data without either party realizing the interference. SSCP candidates must understand MITM attacks because they are a common threat in unencrypted or poorly protected communication channels, enabling attackers to steal credentials, modify transactions, or inject malicious content.
During a MITM attack, the attacker positions themselves between the sender and receiver. They may intercept messages, alter them, or relay them with modifications. The legitimate users believe they are communicating directly with each other, but the attacker controls the exchange. MITM attacks can occur on insecure Wi-Fi networks, compromised routers, DNS spoofing scenarios, or via malware on one of the communicating devices.
Replay attacks simply resend captured messages without altering them. Spoofing impersonates a device or user but does not involve continuous interception. Dictionary attacks attempt to guess passwords from a predefined list. Only MITM attacks involve ongoing interception and modification of messages.
MITM attacks can be mitigated through strong encryption, certificate pinning, HTTPS enforcement, secure DNS, VPNs, and mutual authentication. Organizations must ensure users do not connect to insecure networks and that systems verify the identity of communication partners.
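As one illustration of these mitigations, the sketch below outlines certificate pinning using only the Python standard library; the hostname and pinned fingerprint are placeholders:

```python
# Minimal certificate-pinning sketch: fetch the server's certificate and
# compare its SHA-256 fingerprint to a pinned value.
import hashlib
import ssl

PINNED_SHA256 = "expected-fingerprint-goes-here"  # placeholder digest

def fingerprint_matches(host: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    seen = hashlib.sha256(der).hexdigest()
    # A MITM presenting its own certificate produces a different digest.
    return seen == PINNED_SHA256
```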
Because MITM attacks specifically intercept and alter communication between parties, answer C is correct.
QUESTION 167
Which security concept ensures that a system continues to operate even in the presence of failures by providing duplicate components or alternate processing paths?
A) Integrity
B) Hardening
C) Redundancy
D) Authentication
Answer:
C
Explanation:
Redundancy is the correct answer because it ensures continued system operation by using backup components or alternate pathways. SSCP candidates must understand redundancy because it is essential for building highly available systems capable of withstanding failures without disrupting operations.
Redundancy appears in many forms: redundant power supplies, failover servers, RAID disk arrays, multiple network links, and geographically distributed data centers. When one component fails, the redundant component seamlessly takes over, preventing downtime.
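A minimal failover sketch, with hypothetical endpoint names and a placeholder fetch function, illustrates the alternate-path idea:

```python
# Minimal failover sketch: try the primary service, then fall back to a
# redundant replica. The fetch function is a placeholder.
def fetch_from(endpoint: str) -> str:
    raise ConnectionError(f"{endpoint} unreachable")  # placeholder stub

def read_with_failover(endpoints: list[str]) -> str:
    last_error = None
    for endpoint in endpoints:          # ordered: primary first, replicas next
        try:
            return fetch_from(endpoint)
        except ConnectionError as exc:  # component failed; try the duplicate
            last_error = exc
    raise RuntimeError("all redundant components failed") from last_error
```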
Hardening focuses on strengthening security configurations. Integrity is the property that data remains accurate and unmodified. Authentication verifies user identity. Only redundancy provides continuous functionality despite hardware or software failure.
Redundancy reduces the impact of outages, supports business continuity, and improves system resilience. It is especially critical in high-availability environments such as financial systems, medical systems, cloud infrastructures, and emergency services.
Because redundancy ensures systems remain operational by using duplicate components, answer C is correct.
QUESTION 168
Which type of test evaluates how well personnel understand and follow security policies, procedures, and incident response processes?
A) Functional Testing
B) Social Engineering Testing
C) Security Awareness Testing
D) Penetration Testing
Answer:
C
Explanation:
Security awareness testing is the correct answer because it evaluates whether employees understand and correctly apply the organization’s security policies and procedures. SSCP candidates must understand awareness testing because human error remains one of the leading causes of security incidents.
Awareness testing may include quizzes, phishing simulations, scenario-based exercises, or interviews. These tests validate whether employees can recognize suspicious emails, avoid insecure behavior, follow incident reporting procedures, and comply with organizational policies.
Social engineering testing focuses specifically on tricking users into violating procedures. Penetration testing evaluates technical vulnerabilities. Functional testing verifies software or system behavior. Only security awareness testing measures employee knowledge and compliance.
Awareness testing helps organizations identify weaknesses in training programs and ensure that staff behavior aligns with security expectations. Regular testing reinforces good habits and reduces risk associated with phishing, malware installation, or mishandling sensitive information.
Because awareness testing specifically evaluates employee understanding of security practices, answer C is correct.
QUESTION 169
Which protocol enables secure remote command-line access by encrypting communication between the client and server?
A) Telnet
B) FTP
C) HTTP
D) SSH
Answer:
D
Explanation:
SSH, or Secure Shell, is the correct answer because it encrypts remote command-line communication, protecting credentials and session data from eavesdropping. SSCP candidates must understand SSH because it is a standard tool for securely managing servers, networking equipment, and remote systems.
Telnet transmits all data in plaintext and is insecure. FTP transfers files without encryption. HTTP carries web traffic in plaintext. SSH, by contrast, provides encrypted sessions, key-based authentication, and secure tunneling.
SSH protects against interception, tampering, and impersonation. Administrators use SSH for configuration tasks, log reviews, software installation, and automation via scripts. SSH keys provide stronger authentication than passwords, and tools like SCP and SFTP build secure file transfer on top of SSH.
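As an illustration, the sketch below uses the widely used third-party paramiko library to run one command over an encrypted SSH session; the host, user, and key path are placeholders:

```python
# Minimal sketch with the third-party paramiko library (pip install paramiko).
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()          # verify the server's host key
client.connect(
    "server.example.com",
    username="admin",
    key_filename="/home/admin/.ssh/id_ed25519",  # key-based auth, no password
)
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())           # command ran over an encrypted channel
client.close()
```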
Because SSH specifically provides encrypted remote command-line access, answer D is correct.
QUESTION 170
Which concept requires that sensitive operations, such as financial transactions or system configuration changes, must be approved or performed by more than one individual?
A) Job Rotation
B) Separation of Duties
C) Discretionary Access
D) Least Privilege
Answer:
B
Explanation:
Separation of duties is the correct answer because it requires distributing critical tasks among multiple individuals to prevent fraud, unauthorized changes, and abuse of power. SSCP candidates must understand separation of duties because it reduces insider threats, ensures oversight, and strengthens operational integrity.
For example, the person who requests a financial transaction should not be the same person who approves it. Similarly, the administrator who configures a system should not be the only one who reviews the changes. By dividing responsibilities, organizations prevent any single person from having excessive control over sensitive operations.
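A minimal sketch of this rule in code, with invented names, simply refuses self-approval:

```python
# Minimal sketch: a transaction executes only when the approver is a
# different person than the requester.
def execute_transaction(requester: str, approver: str, amount: float) -> None:
    if requester == approver:
        raise PermissionError("separation of duties: requester cannot self-approve")
    print(f"{approver} approved {requester}'s transaction for ${amount:,.2f}")

execute_transaction("alice", "bob", 25_000)    # allowed
execute_transaction("alice", "alice", 25_000)  # raises PermissionError
```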
Least privilege restricts permissions but does not require multiple people. Job rotation reduces fraud through periodic reassignment but does not require cooperative approval. Discretionary access allows resource owners to grant permissions but does not address separation of roles.
Separation of duties is widely required in regulatory frameworks such as SOX, PCI DSS, and HIPAA. It enhances accountability, reduces fraud opportunities, and ensures checks and balances in sensitive processes.
Because separation of duties requires multiple individuals to complete critical tasks, answer B is correct.
QUESTION 171
Which security principle focuses on minimizing the amount of information disclosed in system messages, banners, and error outputs to avoid helping attackers gather reconnaissance data?
A) Fail-Safe Defaults
B) Least Privilege
C) Information Disclosure Control
D) Open Design
Answer:
C
Explanation:
Information disclosure control is the correct answer because it focuses on limiting the amount of sensitive or detailed information exposed by systems during normal operation or error conditions. SSCP candidates must understand this principle because attackers frequently rely on seemingly harmless details—such as software versions, detailed error messages, or configuration hints—to craft targeted attacks, identify vulnerabilities, or bypass controls.
When systems reveal too much information, they assist attackers in the reconnaissance phase. For example, a verbose error message that states “SQL syntax error near ‘SELECT * FROM users’” confirms that the application uses SQL on the back end, and it may even hint at the exact query. Similarly, banners that disclose the exact version of a web server, SSH daemon, or mail server make it easier for attackers to search for known vulnerabilities associated with that version.
Information disclosure control addresses this problem by ensuring that system outputs are carefully designed to provide enough information for legitimate users and administrators to troubleshoot issues, but not so much that attackers receive a roadmap to exploitation. Error messages should be generic for end users while detailed logs are stored securely for administrators. Service banners should be minimized or disabled to avoid advertising precise version information.
Comparing this principle to the incorrect options clarifies its uniqueness. Fail-safe defaults focus on denying access by default and granting only when explicitly allowed. Least privilege restricts permissions and access levels for users and processes. Open design argues that security should not depend on secrecy of the design or implementation. Only information disclosure control directly addresses limiting what systems reveal to external observers.
Practical implementations include disabling unnecessary service banners, configuring web servers to return generic error pages (such as “500 Internal Server Error”), and ensuring that stack traces or debug information are never shown to users. Applications should avoid echoing back raw input (especially from untrusted sources) in error responses, as this may expose internal logic or security mechanisms.
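A minimal sketch of this pattern: log the full details for administrators, return only a generic reference to the user (logger configuration is assumed to exist elsewhere):

```python
# Minimal sketch: detailed failure information goes to the secured log;
# the end user sees only a generic message with a correlation reference.
import logging
import uuid

log = logging.getLogger("app")

def handle_request(work) -> str:
    try:
        return work()
    except Exception:
        incident_id = uuid.uuid4().hex[:8]
        # Full stack trace is logged for administrators, never shown to users.
        log.exception("request failed (incident %s)", incident_id)
        return f"An internal error occurred (reference {incident_id})."
```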
Information disclosure is also critical from a privacy perspective. Systems must avoid leaking personal or sensitive data through logs, debug modes, or misconfigured APIs. Even partial data, such as truncated IDs or masked credit card numbers, must be handled carefully to avoid inference attacks.
Because information disclosure control specifically aims to reduce the amount of useful information available to attackers while still supporting legitimate operations, and because no other option fits that focus precisely, answer C is correct.
QUESTION 172
Which security testing approach involves providing testers with full knowledge of the system’s internal architecture, source code, and configuration details before the assessment?
A) Black-Box Testing
B) Gray-Box Testing
C) White-Box Testing
D) Blind Testing
Answer:
C
Explanation:
White-box testing is the correct answer because it is a security and quality assurance approach in which testers have complete visibility into the system’s internal design, source code, architecture, and configuration. SSCP candidates must understand white-box testing because it allows for deep, comprehensive analysis of how security controls are implemented and where weaknesses may exist within the code and system logic.
In white-box testing, the objective is not to simulate an external attacker with no prior knowledge, but to thoroughly evaluate all aspects of the system with maximum insight. Testers can examine authentication flows, authorization checks, input validation routines, cryptographic implementations, error handling, and logging mechanisms at a granular level. This allows them to identify subtle vulnerabilities such as insecure logic, race conditions, poorly handled exceptions, and misuse of cryptographic libraries that may not be evident from external testing alone.
Comparing this approach with other options helps clarify why white-box testing is correct. Black-box testing assumes no prior knowledge of internal workings; testers interact only through exposed interfaces, much like an outsider would. Gray-box testing provides partial knowledge—perhaps some design documentation or limited credentials—offering a middle ground between realism and depth. Blind testing generally simulates an external attacker with minimal information, often focusing on real-world reconnaissance and exploitation.
Because white-box testing grants full internal visibility, it allows for highly targeted test cases. For example, testers can review critical sections of code handling authentication, then craft specific tests to exercise edge cases. They can also check whether input validation is consistently applied at all layers, verify that cryptographic keys are handled securely, and confirm that sensitive operations require proper authorization.
White-box testing is particularly useful in secure software development lifecycles (SDLCs). It can be integrated into unit testing, secure code review, and integration test phases. When combined with automated static analysis tools, white-box testing helps catch vulnerabilities early, when remediation is less costly and less disruptive.
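As a small illustration, a white-box tester who can read a validation routine writes tests aimed at its exact boundaries; the function under test below is invented for the example:

```python
# Minimal sketch: with source access, tests target the exact edge cases
# visible in the code (length boundaries, character restrictions).
import unittest

def validate_username(name: str) -> bool:
    return 3 <= len(name) <= 32 and name.isalnum()

class WhiteBoxValidationTests(unittest.TestCase):
    def test_boundary_lengths(self):
        # Chosen by reading the code: the 3- and 32-character boundaries.
        self.assertFalse(validate_username("ab"))
        self.assertTrue(validate_username("abc"))
        self.assertTrue(validate_username("a" * 32))
        self.assertFalse(validate_username("a" * 33))

    def test_injection_characters_rejected(self):
        self.assertFalse(validate_username("bob'; DROP TABLE users--"))

if __name__ == "__main__":
    unittest.main()
```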
However, white-box testing has limitations. Because it requires significant access and insider-level knowledge, it may not accurately simulate an attacker’s perspective. It may also demand more effort and specialized expertise, especially for large, complex codebases. Therefore, organizations often combine white-box with black-box and gray-box techniques to achieve a comprehensive testing strategy.
Because white-box testing uniquely involves full knowledge of internal design, code, and configuration, answer C is correct.
QUESTION 173
Which incident response document provides predefined, step-by-step procedures tailored to specific incident types, such as ransomware, data breach, or DDoS attacks?
A) Security Policy
B) Incident Response Playbook
C) Business Impact Analysis
D) Disaster Recovery Charter
Answer:
B
Explanation:
An incident response playbook is the correct answer because it contains detailed, predefined procedures for handling specific categories of incidents. SSCP candidates must understand playbooks because they translate high-level incident response policy into actionable workflows that responders can follow under pressure, reducing confusion, improving speed, and ensuring consistency.
A security policy outlines general principles, roles, and responsibilities but does not provide step-by-step technical or operational guidance. A business impact analysis focuses on understanding how disruptions affect business processes, not on responding to incidents. A disaster recovery charter may define general recovery objectives and governance but lacks detailed technical steps for specific incident types.
Incident response playbooks, on the other hand, are operational documents. For example, a ransomware playbook might include steps such as isolating affected systems, disabling certain network shares, preserving forensic evidence, identifying the ransomware variant, validating backup integrity, coordinating with legal and management, and communicating with stakeholders. A data breach playbook might include containment actions, data classification assessments, regulatory notification processes, and communication guidelines for affected parties.
Playbooks typically define:
• Detection criteria and initial triage steps
• Roles and responsibilities for responders
• Containment and eradication strategies specific to the threat
• Evidence collection and preservation procedures
• Communication plans (internal and external)
• Escalation thresholds and decision points
• Recovery and validation actions
• Post-incident review activities
By having these steps documented in advance, organizations avoid improvisation during high-stress incidents. This reduces the risk of mistakes such as destroying evidence, failing to notify regulators on time, or taking systems offline unnecessarily.
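One illustrative way to make such steps trackable is to encode them as structured data; the contents below are examples only, not a prescribed playbook:

```python
# Illustrative sketch: encoding a ransomware playbook's initial steps as
# structured data so tooling can track ownership and completion.
RANSOMWARE_PLAYBOOK = {
    "trigger": "EDR alert: mass file encryption detected",
    "steps": [
        {"order": 1, "action": "Isolate affected hosts from the network", "owner": "SOC analyst"},
        {"order": 2, "action": "Preserve volatile memory and disk images", "owner": "Forensics"},
        {"order": 3, "action": "Identify ransomware variant and scope", "owner": "IR lead"},
        {"order": 4, "action": "Validate backup integrity before restore", "owner": "Infrastructure"},
        {"order": 5, "action": "Notify legal, management, and regulators as required", "owner": "IR lead"},
    ],
}

for step in RANSOMWARE_PLAYBOOK["steps"]:
    print(f'{step["order"]}. {step["action"]} — {step["owner"]}')
```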
Playbooks also help ensure compliance with legal and regulatory requirements. For example, for a breach involving personal data, playbooks can include jurisdiction-specific notification timelines and content requirements. They also help align technical response actions with business priorities determined during business impact analysis and risk assessments.
Playbooks must be kept up to date. Threats evolve quickly, and procedures that were effective against older ransomware variants or DDoS tools may no longer suffice. Regular tabletop exercises and simulations allow organizations to test and refine their playbooks, ensuring they remain relevant and effective.
Because an incident response playbook provides specific, step-by-step instructions tailored to defined incident types, and none of the other options serve this function, answer B is correct.
QUESTION 174
Which identity and access management concept allows a user to authenticate once and then access multiple independent systems or applications without re-entering credentials each time?
A) Multi-Factor Authentication
B) Single Sign-On
C) Role-Based Access Control
D) Just-in-Time Access
Answer:
B
Explanation:
Single Sign-On (SSO) is the correct answer because it allows users to authenticate once with a central identity provider and then access multiple systems or applications without repeated logins. SSCP candidates must understand SSO because it improves usability, reduces password fatigue, and, when properly implemented, can enhance security by centralizing authentication and policy enforcement.
With SSO, users log in to an identity provider using their primary credentials. The identity provider then issues tokens, tickets, or assertions that other applications trust. As the user attempts to access additional systems, these tokens are presented and validated, granting access without requiring another password prompt. Common SSO technologies include Kerberos, SAML, OAuth, and OpenID Connect.
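The sketch below is a deliberately simplified stand-in for such a token; real deployments use SAML assertions or signed JWTs, and identity providers sign with asymmetric keys rather than a shared secret:

```python
# Simplified SSO token sketch: the identity provider signs a payload once,
# and each application verifies the signature instead of prompting again.
import base64
import hashlib
import hmac
import json

IDP_SECRET = b"shared-signing-key"  # placeholder; real IdPs use asymmetric keys

def issue_token(username: str) -> str:
    payload = base64.urlsafe_b64encode(json.dumps({"sub": username}).encode())
    sig = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def verify_token(token: str) -> dict:
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(IDP_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid token signature")
    return json.loads(base64.urlsafe_b64decode(payload))

token = issue_token("alice")   # one login at the identity provider...
print(verify_token(token))     # ...accepted by any application sharing trust
```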
Comparing SSO with the incorrect options clarifies why option B is correct. Multi-factor authentication (MFA) strengthens authentication by requiring multiple factors (something you know, have, or are) but does not by itself provide single sign-on capability. RBAC determines what users can access based on roles, not how often they authenticate. Just-in-time access temporarily elevates privileges when needed but is not primarily about consolidating logins.
SSO offers several benefits. It reduces the number of passwords users must manage, decreasing the likelihood of password reuse or insecure storage. Centralized authentication can improve monitoring, logging, and enforcement of policies such as MFA or password complexity. It also simplifies account deactivation—disabling a single identity can cut off access to many applications simultaneously.
However, SSO can introduce risk if not properly secured. Because one set of credentials grants access to many systems, compromise of those credentials has broader consequences. Therefore, SSO implementations should be paired with strong MFA, session management controls, and anomaly detection.
SSO is widely used in enterprise environments, cloud platforms, and federated identity scenarios where multiple organizations allow cross-access. It can integrate with on-premises directories, cloud identity providers, and third-party SaaS applications.
Because SSO specifically allows users to authenticate once and then reuse that authentication across multiple systems, answer B is correct.
QUESTION 175
Which type of disaster recovery test involves simulating a disruption and walking through documented procedures without actually shutting down systems or executing failover?
A) Full Interruption Test
B) Parallel Test
C) Tabletop Exercise
D) Cutover Test
Answer:
C
Explanation:
A tabletop exercise is the correct answer because it involves simulating a disaster scenario and having participants verbally walk through their roles, decisions, and procedures without physically impacting systems or operations. SSCP candidates must understand tabletop exercises because they are a low-risk, cost-effective way to test disaster recovery plans, incident response plans, and business continuity procedures.
In a typical tabletop exercise, key stakeholders—such as IT staff, security professionals, business managers, and communications personnel—gather in a meeting setting. Facilitators present a scenario, such as a data center fire, ransomware outbreak, or widespread power outage. Participants then discuss how they would respond, referencing documented plans, identifying responsibilities, and highlighting decision points.
Unlike full interruption tests, tabletop exercises do not require shutting down production systems. Full interruption tests intentionally stop normal operations and rely entirely on recovery mechanisms, which introduces higher risk and operational impact. Parallel tests involve running recovery systems alongside production to validate readiness without fully cutting over. Cutover tests move operations entirely to the recovery environment, often as a final verification step.
Tabletop exercises help identify gaps, ambiguities, or outdated information in plans. For example, participants may realize contact lists are outdated, certain roles are unclear, or backup procedures are not well understood. These insights allow organizations to refine and improve their documentation and training before a real incident occurs.
Additionally, tabletop exercises build familiarity and confidence among team members. During an actual disaster, personnel who have rehearsed their roles are more likely to respond effectively and calmly. It also provides an opportunity for cross-functional coordination across IT, security, legal, HR, and management roles.
Tabletop exercises can be tailored to different objectives. Some focus on incident communication, others on technical recovery steps or regulatory reporting. They may be scheduled regularly—e.g., quarterly or annually—and may be required by regulatory frameworks or industry best practices.
Because a tabletop exercise specifically tests disaster recovery and incident response plans through discussion and simulation without operational disruption, answer C is correct.
QUESTION 176
Which cryptographic concept ensures that even if an attacker gains access to current encryption keys, they cannot decrypt past communications that used previous session keys?
A) Key Escrow
B) Perfect Forward Secrecy
C) Key Stretching
D) Key Escalation
Answer:
B
Explanation:
Perfect Forward Secrecy (PFS) is the correct answer because it ensures that the compromise of long-term keys does not allow an attacker to decrypt past encrypted sessions. SSCP candidates must understand PFS because it significantly improves the security of encrypted communications, particularly for protocols like TLS used to secure web traffic, VPNs, and messaging applications.
Without PFS, many protocols derive session keys directly or indirectly from a long-term key pair. If an attacker records encrypted traffic and later obtains the long-term private key (through compromise, theft, or court order), they can decrypt past sessions retroactively. This is especially problematic when encrypted traffic is stored by adversaries for later analysis.
PFS solves this by using ephemeral key exchange mechanisms—typically Diffie-Hellman or Elliptic Curve Diffie-Hellman (DHE/ECDHE). For each session, a unique temporary key is generated and used to derive session keys. Even if the long-term key or server certificate is compromised in the future, those past session keys cannot be derived, preventing decryption of historical communications.
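A minimal sketch using the third-party cryptography package (pip install cryptography) illustrates the ephemeral exchange: both sides generate throwaway keys for each session, so nothing retained long-term can re-derive a past session secret:

```python
# Minimal ephemeral ECDH sketch: fresh key pairs per session provide
# forward secrecy, because the private keys are never stored.
from cryptography.hazmat.primitives.asymmetric import ec

def new_session_secret() -> bytes:
    # Both sides generate throwaway keys for THIS session only.
    client_eph = ec.generate_private_key(ec.SECP256R1())
    server_eph = ec.generate_private_key(ec.SECP256R1())
    secret = client_eph.exchange(ec.ECDH(), server_eph.public_key())
    # The ephemeral private keys go out of scope here and are discarded,
    # so a later compromise of long-term keys cannot recover this secret.
    return secret

assert new_session_secret() != new_session_secret()  # unique per session
```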
Key escrow refers to securely storing copies of keys with a third party for recovery or lawful access—not protecting past sessions. Key stretching strengthens weak keys derived from low-entropy secrets like passwords but does not guarantee forward secrecy. Key escalation is not a standard cryptographic term.
PFS is vital for protecting privacy in the face of long-term storage of encrypted traffic. Adversaries that capture traffic today in hopes of decrypting it later cannot succeed if PFS is properly implemented. TLS 1.3 mandates ephemeral key exchange, TLS 1.2 supports PFS through DHE and ECDHE cipher suites, and security best practices recommend configuring servers to prefer or require such suites.
Because perfect forward secrecy ensures that the compromise of current keys does not expose past communications, answer B is correct.
QUESTION 177
Which physical security control uses two interlocking doors that allow only one person at a time to enter a secure area, helping prevent tailgating and piggybacking?
A) Turnstile
B) Badge Reader
C) Mantrap
D) Security Camera
Answer:
C
Explanation:
A mantrap is the correct answer because it is a physical access control mechanism consisting of two interlocked doors that form a small space between a non-secure area and a secure area. Only one door can open at a time, and typically only one person is allowed inside at once. SSCP candidates must understand mantraps because they are highly effective at preventing tailgating and piggybacking, where unauthorized persons follow authorized users into restricted locations.
When an individual wants to enter a secure area, they pass through the first door into the mantrap. That door must close and lock before the second door can be opened, usually after successful authentication via badge, biometric reader, or PIN. Security staff or automated systems can verify the individual, and in some implementations, cameras or weight sensors ensure that only one person is inside the mantrap at a time.
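The interlock logic can be sketched as a small state machine (purely illustrative; real mantraps implement this in hardware and access control systems):

```python
# Illustrative interlock sketch: the inner door may open only after the
# outer door has closed and the occupant has authenticated.
class Mantrap:
    def __init__(self):
        self.outer_open = False
        self.inner_open = False
        self.authenticated = False

    def open_outer(self):
        if self.inner_open:
            raise RuntimeError("interlock: inner door must be closed first")
        self.outer_open = True

    def close_outer(self):
        self.outer_open = False

    def authenticate(self, credential_valid: bool):
        self.authenticated = credential_valid

    def open_inner(self):
        if self.outer_open or not self.authenticated:
            raise RuntimeError("interlock: outer door open or not authenticated")
        self.inner_open = True  # only now does the secure area open

trap = Mantrap()
trap.open_outer(); trap.close_outer()
trap.authenticate(True)
trap.open_inner()  # succeeds only in this sequence
```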
Turnstiles can limit physical access but do not completely prevent someone from pushing through behind another person, especially in unsecured designs. Badge readers validate credentials but do not physically prevent tailgating. Security cameras record activity and may deter misconduct but cannot stop physical entry.
Mantraps are used in environments that require strong physical control, such as data centers, high-security government buildings, research laboratories, and financial institutions. They create a choke point where identity verification and additional checks can be applied before granting access to sensitive areas.
Because a mantrap specifically consists of two interlocking doors designed to prevent multiple individuals from entering together and to mitigate tailgating, answer C is correct.
QUESTION 178
Which security document formally grants an individual or system the authority to access specific resources under defined conditions and constraints?
A) Security Policy
B) Access Control List
C) Authorization Statement
D) Service-Level Agreement
Answer:
C
Explanation:
An authorization statement is the correct answer because it formally grants permission to an individual, process, or system to access defined resources under specified conditions. SSCP candidates must understand authorization because it is a central component of access control, determining what authenticated entities are allowed to do.
While access control lists define permissions at a technical level, the authorization statement is the formal decision or record that such access is approved. It might be documented through written approvals, system configuration specifications, or formal access requests that are reviewed and signed by appropriate authorities.
A security policy provides overarching rules and guidelines but does not itself grant specific access to individuals. An access control list is a technical mechanism used in systems like file permissions or firewall rules but is typically an implementation of authorization decisions rather than the formal approval itself. A service-level agreement focuses on performance and service terms, not security authorization.
Authorization statements are often part of access request workflows. For example, when an employee needs access to a financial application, they submit a request that is reviewed by their manager and the system owner. Once approved, that authorization is recorded and then implemented via access control systems.
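An illustrative sketch of the record such a workflow might capture (field names are examples, not a standard schema):

```python
# Illustrative sketch of an authorization record from an access-request
# workflow; all field names and values are invented examples.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AuthorizationStatement:
    subject: str          # who is granted access
    resource: str         # what they may access
    conditions: str       # constraints on the access
    approved_by: str      # the accountable approvers
    expires: date         # forces periodic re-review

grant = AuthorizationStatement(
    subject="jsmith",
    resource="finance-app (read/write)",
    conditions="business hours, corporate network only",
    approved_by="manager: akumar; system owner: lchen",
    expires=date(2026, 6, 30),
)
```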
Proper documentation of authorization is important for accountability and auditability. During audits, organizations must demonstrate that access assignments were properly reviewed and approved, not granted arbitrarily. This is especially important for privileged access, where excessive rights could lead to misuse or major incidents.
Because an authorization statement specifically records and defines who is allowed to access what, under what conditions, and with whose approval, answer C is correct.
QUESTION 179
Which security concept focuses on ensuring that security controls do not excessively interfere with business operations, usability, or productivity?
A) Security Through Obscurity
B) Operational Feasibility
C) Defense in Depth
D) Security-Usability Balance
Answer:
D
Explanation:
Security-usability balance is the correct answer because it addresses the need to implement strong security controls without making systems so difficult to use that users circumvent or disable protections. SSCP candidates must understand this concept because overly rigid security can drive users to insecure workarounds, ultimately weakening the security posture.
For example, if password policies are excessively complex and require frequent changes, users may write passwords down or reuse them across systems. If access controls are too restrictive, employees may store data on unauthorized personal devices or cloud services to get their work done. If multi-factor authentication is implemented in a way that constantly frustrates users, they may resist adoption or look for ways to bypass it.
Operational feasibility (option B) relates to whether a solution can be practically implemented but does not explicitly address the tradeoff between security and usability from the user’s perspective. Defense in depth refers to layered controls but says nothing about the user experience. Security through obscurity relies on hiding implementation details, which is considered a weak security strategy.
Finding the right security-usability balance involves understanding user workflows, business needs, and the risk environment. Controls should be strong enough to mitigate threats but aligned with how people actually perform their work. Involving users and business units in the design and implementation of security measures helps ensure that controls support, rather than hinder, operations.
Techniques for improving this balance include using single sign-on with strong backend controls, deploying user-friendly multi-factor authentication methods, designing clear and intuitive security prompts, and automating security tasks where possible. Training and communication also play important roles, helping users understand why controls exist and how to comply with them effectively.
Because the security-usability balance specifically focuses on ensuring security does not obstruct business operations or usability, and none of the other options capture that concept fully, answer D is correct.
QUESTION 180
Which risk treatment option involves eliminating the activity that generates the risk, rather than trying to mitigate, transfer, or accept it?
A) Risk Mitigation
B) Risk Transfer
C) Risk Avoidance
D) Risk Acceptance
Answer:
C
Explanation:
Risk avoidance is the correct answer because it involves making a conscious decision to stop or not engage in an activity that creates risk, thereby removing the risk entirely. SSCP candidates must understand risk avoidance as one of the primary risk treatment strategies, alongside mitigation, transfer, and acceptance.
For example, if hosting a public-facing application exposes an organization to unacceptable levels of attack risk and compliance obligations, the organization might choose not to host the application at all or to discontinue the service. If processing a certain type of sensitive data is too risky relative to its business value, the organization might decide not to collect or store that data in the first place.
Risk mitigation involves reducing the likelihood or impact of a risk by implementing controls such as firewalls, encryption, and access controls. Risk transfer shifts some or all of the financial impact to a third party, such as through cyber insurance or outsourcing certain operations. Risk acceptance means formally acknowledging the risk without additional controls because it falls within tolerable levels.
Risk avoidance is often the most effective treatment from a pure security perspective because a risk that does not exist cannot be exploited. However, it may have significant business implications, as avoiding risk often means foregoing certain opportunities, services, or efficiencies. Therefore, organizations must consider the tradeoffs between security and business objectives.
A thorough risk assessment and business impact analysis inform whether avoidance is appropriate. When a risk’s cost or potential damage far exceeds the value of the associated activity, avoidance may be the most rational decision.
Because risk avoidance specifically refers to eliminating the risk by stopping or not starting the risky activity, and this approach is distinct from mitigation, transfer, and acceptance, answer C is correct.