ISC SSCP Systems Security Certified Practitioner (SSCP) Exam Dumps and Practice Test Questions Set 7: Questions 121-140


QUESTION 121

Which backup strategy creates a real-time or near–real-time copy of data to a secondary location, minimizing data loss and supporting aggressive recovery objectives?

A) Cold Backup
B) Continuous Data Protection
C) Differential Backup
D) Media Rotation

Answer:

B

Explanation:

Continuous Data Protection (CDP) is the correct answer because it offers real-time or near–real-time replication of data, enabling organizations to restore systems with extremely low Recovery Point Objectives (RPOs). SSCP candidates must understand CDP because it represents one of the most advanced data protection techniques in modern enterprise environments, handling rapid data changes and supporting resiliency against accidental deletion, corruption, ransomware, and catastrophic system failures.

CDP works by capturing every change made to a file or dataset and immediately transmitting that change to a secondary storage repository. Instead of taking periodic snapshots or scheduled backups, CDP continuously monitors I/O operations and records modifications in a journal, allowing administrators to recover data at virtually any point in time. This eliminates the risk of losing hours of data between scheduled backups—an essential capability for organizations that handle high-volume transactions, dynamic datasets, and mission-critical workloads.
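To make the journaling idea concrete, here is a minimal, illustrative sketch in Python of a CDP-style change capture loop. It is not a product implementation: real CDP hooks the storage I/O path rather than polling, and the directory names used here are hypothetical.

```python
# Minimal sketch of CDP-style change journaling (illustrative only, not a product).
# It polls a source directory and copies every changed file into a timestamped
# journal, so earlier versions can be restored at a chosen point in time.
# The directory names below are hypothetical examples.
import shutil
import time
from pathlib import Path

SOURCE = Path("data")            # directory being protected (assumed to exist)
JOURNAL = Path("cdp_journal")    # secondary repository holding versioned copies


def capture_changes(last_seen: dict[Path, float]) -> None:
    """Copy any file whose modification time changed since the last poll."""
    for path in SOURCE.rglob("*"):
        if not path.is_file():
            continue
        mtime = path.stat().st_mtime
        if last_seen.get(path) != mtime:
            last_seen[path] = mtime
            stamp = time.strftime("%Y%m%dT%H%M%S", time.gmtime(mtime))
            dest = JOURNAL / stamp / path.relative_to(SOURCE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)   # preserves timestamps for later recovery


if __name__ == "__main__":
    seen: dict[Path, float] = {}
    while True:
        capture_changes(seen)
        time.sleep(1)  # near-real-time polling; true CDP intercepts writes instead
```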

To understand why CDP is correct, it helps to evaluate backup timing. Traditional differential or incremental backups might occur every night or every few hours. If a failure happens shortly before the next scheduled backup, all changes since the last backup are lost. CDP solves this problem by continuously capturing data changes the moment they occur, thereby reducing potential data loss to seconds or milliseconds. For environments such as financial institutions, e-commerce platforms, or healthcare systems, this is essential for maintaining trust, accuracy, and operational continuity.

Comparing CDP to the incorrect answers clarifies why those options are unsuitable. A cold backup (option A) is an offline backup process, usually performed while systems are powered down, and it provides no real-time replication. Differential backups (option C) store changes since the last full backup but run only periodically, so they cannot provide second-by-second restoration points. Media rotation (option D) refers to cycling backup media, such as tapes, on a schedule; it does not define a real-time or continuous process.

CDP also plays a critical role in modern cybersecurity. Ransomware attacks often encrypt large portions of storage. If an organization relies solely on daily backups, an attack that occurs midday could wipe out many hours of work. CDP provides granular rollback options, allowing recovery to a point just before the ransomware executed. This capability dramatically reduces downtime and prevents catastrophic data loss.

In addition, CDP can support compliance requirements for industries that must maintain precise records. Regulatory frameworks such as PCI DSS, HIPAA, SOX, and financial industry standards demand strong protections against data loss and corruption. CDP ensures data accuracy and auditability by preserving historical versions and enabling fine-grained recovery.

However, CDP is not without challenges. Continuous replication consumes network bandwidth, requires robust storage hardware, and may introduce performance overhead if not properly tuned. Organizations must ensure that CDP infrastructure is itself resilient—if the secondary storage is compromised, CDP must fail over safely. Encryption, access controls, and integrity checks are essential components of a secure CDP deployment.

For environments requiring minimal data loss and rapid recovery, no other backup method provides the same granularity and immediacy as Continuous Data Protection. For these reasons, answer B is correct.

QUESTION 122

Which wireless security protocol improves upon WPA2 by introducing individualized encryption, stronger cryptographic primitives, and protections against brute-force attacks?

A) WEP
B) WPA3
C) WPA2-PSK
D) Open Authentication

Answer:

B

Explanation:

WPA3 is the correct answer because it enhances wireless security far beyond earlier protocols, addressing vulnerabilities and cryptographic weaknesses that attackers exploit in WPA2. SSCP candidates must fully understand WPA3 because wireless networks remain one of the most common attack vectors, and many organizations continue to transition their wireless infrastructure to this more secure standard.

WPA3 introduces several improvements, beginning with its use of Simultaneous Authentication of Equals (SAE), also known as the Dragonfly key exchange. SAE replaces the older Pre-Shared Key (PSK) method used in WPA2-Personal. In PSK mode, passwords are vulnerable to offline dictionary attacks because captured handshake traffic can be brute-forced without interacting with the network again. SAE eliminates this vulnerability by performing a more sophisticated handshake resistant to offline cracking. Attackers cannot simply capture handshake packets and attempt unlimited password guesses; instead, each guess requires a new interaction with the access point, drastically increasing the difficulty of brute-force attempts.
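The offline-guessing weakness of WPA2-PSK follows directly from how its pairwise master key is derived. The short Python sketch below illustrates the idea; the SSID and wordlist are hypothetical, and real cracking tools validate each guess against the captured handshake's MIC rather than against a known PMK.

```python
# Sketch of why WPA2-PSK allows offline guessing: the pairwise master key (PMK)
# is PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes), so an
# attacker holding captured handshake material can test passphrases offline.
# SSID and wordlist values here are hypothetical examples.
import hashlib

SSID = "CorpWiFi"
WORDLIST = ["password123", "letmein", "CorrectHorseBatteryStaple"]


def derive_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)


# Stand-in for a PMK an attacker could confirm against a captured 4-way handshake.
target_pmk = derive_pmk("letmein", SSID)

for guess in WORDLIST:
    if derive_pmk(guess, SSID) == target_pmk:
        print(f"Passphrase found offline: {guess}")
        break
```

Under WPA3's SAE exchange, each password guess requires a fresh interaction with the access point, which is exactly the property that defeats this kind of offline loop.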

WPA3 also enhances encryption. In WPA2-Personal, every device derives its encryption keys from the same pre-shared passphrase, so anyone who knows the passphrase and captures another device's handshake can decrypt that device's traffic. WPA3 addresses this with individualized data encryption: the SAE exchange produces a unique pairwise key for each device session, so the passphrase alone no longer exposes other users' traffic, improving both privacy and isolation.

Additionally, WPA3-Enterprise increases the minimum encryption strength to 192-bit security, aligning with the Commercial National Security Algorithm (CNSA) Suite. This provides high-assurance protection for government, healthcare, and financial institutions that require robust cryptographic defenses.

Comparing WPA3 with the incorrect options clarifies why those choices are not adequate. WEP (option A) is extremely insecure due to flawed initialization vectors and predictable key structures; modern systems should never use it. WPA2-PSK (option C) uses stronger encryption than WEP but remains vulnerable to key-reinstallation attacks (KRACK) and offline dictionary attacks. Open authentication (option D) provides no encryption at all, leaving networks exposed to eavesdropping, session hijacking, and malicious access.

WPA3 further offers enhancements such as Protected Management Frames (PMF), which protect critical management traffic (like deauthentication frames) from spoofing attacks. Without PMF, attackers can force clients offline to perform man-in-the-middle or denial-of-service attacks. WPA3 also adds forward secrecy, ensuring that even if long-term keys are compromised, past session traffic remains secure.

While WPA3 is superior, older devices may lack support, and mixed WPA2/WPA3 mode can reduce some protections. Organizations should upgrade clients and infrastructure to take full advantage of the new capabilities.

Because WPA3 introduces advanced authentication, individualized encryption, strong cryptography, and protections against common wireless attacks, answer B is correct.

QUESTION 123

Which security testing method evaluates applications by executing them in a runtime environment to identify vulnerabilities that only become visible during operation?

A) Static Analysis
B) Dynamic Analysis
C) Code Review
D) Change Control Testing

Answer:

B

Explanation:

Dynamic Analysis is the correct answer because it evaluates applications while they are actively running, allowing testers to observe real-time behavior, execution flows, memory use, and interactions with external systems. SSCP candidates must understand dynamic analysis because many vulnerabilities only manifest when an application is operational—such as runtime memory errors, injection flaws, authentication bypasses, state-handling issues, and configuration weaknesses.

Unlike static analysis, which examines source code or binaries without executing them, dynamic analysis places the application in a controlled execution environment. Testers simulate user interactions, attempt malicious inputs, analyze responses, and observe system calls. This approach provides visibility into issues that static analysis may miss, such as environmental dependencies, misconfigurations, race conditions, and improper session management.

Dynamic analysis tools—often referred to as Dynamic Application Security Testing (DAST) solutions—can detect cross-site scripting, SQL injection, insecure redirects, missing security headers, and improper error handling. These tools identify vulnerabilities in web applications by crawling pages, submitting crafted payloads, and analyzing HTTP responses for signs of weaknesses.
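As a simple illustration of the DAST approach, the sketch below submits a marker payload to a query parameter and checks whether it is reflected unescaped in the response. The target URL and parameter name are hypothetical, and such probes should only be run against systems you are authorized to test.

```python
# Minimal sketch of a DAST-style probe: inject a marker payload into a query
# parameter and check whether it is reflected unescaped in the HTTP response.
# The endpoint and parameter name are hypothetical examples.
import urllib.parse
import urllib.request

TARGET = "http://testapp.example/search"   # hypothetical endpoint
PARAM = "q"
PAYLOAD = "<script>alert('dast-probe')</script>"

url = f"{TARGET}?{urllib.parse.urlencode({PARAM: PAYLOAD})}"
with urllib.request.urlopen(url, timeout=10) as resp:
    body = resp.read().decode(errors="replace")

if PAYLOAD in body:
    print("Payload reflected without encoding: possible reflected XSS")
else:
    print("Payload not reflected verbatim; no finding from this probe")
```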

Dynamic analysis is essential in quality assurance environments where the application runs in a near-production state. This allows realistic evaluation of how the application handles authentication, authorization, and input validation, especially when interacting with databases, APIs, or file systems. It also tests how the application behaves under load, stress, or unusual operational conditions.

Comparing dynamic analysis to incorrect options highlights why those alternatives do not fit. Static Analysis (option A) examines code without running it and cannot detect runtime-dependent flaws. Code Review (option C) is a manual examination of source code, valuable but limited to human interpretation. Change Control Testing (option D) verifies that approved system changes function correctly but is not focused specifically on analyzing runtime vulnerabilities.

Dynamic analysis is especially important for detecting memory-related vulnerabilities in compiled programs. Issues like buffer overflows, heap corruption, and use-after-free errors are often invisible in static reviews but become evident during execution. Modern dynamic analysis platforms may incorporate debugging hooks, memory instrumentation, taint tracking, and system call monitoring.

Furthermore, dynamic analysis plays a critical role in secure DevOps (DevSecOps). Automated runtime testing in continuous integration pipelines ensures vulnerabilities are caught early before deployment. Cloud-native environments also benefit because dynamic analysis can evaluate containerized applications, microservices, and serverless functions during execution.

Because dynamic analysis provides essential insight into vulnerabilities that appear only during application operation, and because it complements but does not replace static methods, answer B is correct.

QUESTION 124

Which type of malware spreads without user interaction by exploiting vulnerabilities in operating systems or network services, often causing rapid, widespread infection?

A) Trojan
B) Worm
C) Spyware
D) Rootkit

Answer:

B

Explanation:

A worm is the correct answer because it is a self-replicating form of malware that spreads automatically across networks by exploiting vulnerabilities, requiring no user action to propagate. SSCP candidates must deeply understand worms because they represent some of the most destructive and fast-moving malware in cybersecurity history, capable of compromising thousands or even millions of systems in a short period.

Worms differ fundamentally from other malware types. A Trojan requires the user to run a malicious file; spyware covertly monitors user activity; rootkits hide malicious activity by modifying system components. Worms, however, identify vulnerable systems and spread autonomously, making them uniquely capable of large-scale disruption.

Historically significant worms—such as Code Red, SQL Slammer, Conficker, and WannaCry—demonstrate the catastrophic potential of automated propagation. Some worms infect systems within seconds of discovering a vulnerability. For instance, SQL Slammer spread so quickly in 2003 that it doubled its number of infected hosts every 8.5 seconds. WannaCry exploited the EternalBlue vulnerability in SMBv1, encrypting files across global networks within hours.

Worms typically follow a multi-stage process:
• Scanning: worms scan IP ranges to identify vulnerable targets.
• Exploitation: they exploit specific vulnerabilities, such as buffer overflows or misconfigurations.
• Replication: once inside a new system, they copy their payload and begin scanning again.
• Execution: worms may drop backdoors, ransomware, botnet agents, or other malicious components.

Because worms exploit vulnerabilities, patch management is a critical defense. Systems lacking timely security updates become easy targets. Network segmentation using VLANs, firewalls, and IPS solutions also helps limit worm spread by restricting lateral movement.
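From the defender's side, knowing which hosts still expose a worm's favorite service helps prioritize patching and segmentation. The sketch below, using only the Python standard library, checks a hypothetical subnet for hosts that accept connections on TCP 445 (SMB), the service abused by WannaCry's EternalBlue exploit; run this only against networks you own or are authorized to assess.

```python
# Defensive sketch: inventory which hosts on a (hypothetical) /24 still expose
# TCP 445 (SMB) so patching and segmentation can be prioritized.
import socket

SUBNET = "192.168.1"   # hypothetical network; scan only networks you are authorized to test


def port_open(host: str, port: int = 445, timeout: float = 0.5) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


exposed = [f"{SUBNET}.{i}" for i in range(1, 255) if port_open(f"{SUBNET}.{i}")]
print("Hosts exposing SMB (patch/segment these first):", exposed)
```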

The incorrect options do not possess autonomous replication capabilities:
• A Trojan (A) disguises itself as legitimate software but does not self-propagate.
• Spyware (C) collects data secretly but relies on user actions for installation.
• Rootkits (D) conceal malicious activity but do not independently replicate.

Worms often incorporate multiple attack components. Many modern worms combine worm-like propagation with ransomware encryption, credential harvesting, or botnet formation. Advanced worms may also use polymorphism, altering their code signatures to evade detection by antivirus systems.

Because worms propagate independently, exploit vulnerabilities automatically, and create widespread infection at high speed, answer B is correct.

QUESTION 125

Which physical security control restricts entry to sensitive areas by requiring two or more independent mechanisms—such as badge plus PIN—to grant access?

A) Single-Factor Authentication
B) Mantrap
C) Multi-Factor Physical Access Control
D) Access Logs

Answer:

C

Explanation:

Multi-Factor Physical Access Control is the correct answer because it requires two or more independent authentication methods—such as something the person has (an access card), something the person knows (a PIN), or something the person is (a biometric)—before granting entry to a secured area. SSCP candidates must understand this concept because physical breaches can be as damaging as cyber intrusions, providing attackers with direct access to servers, network devices, and sensitive documents.

Physical multi-factor authentication mimics the principles of digital MFA but applies them to real-world access points. For example, entering a data center might require a smart card swipe (possession factor), entering a personal PIN (knowledge factor), and passing a biometric scan (inherence factor). Requiring multiple independent factors significantly reduces the risk of unauthorized entry, even if one method is compromised.

Many attacks on physical facilities stem from stolen badges, tailgating, social engineering, or insider threats. Multi-factor physical controls mitigate these risks. A stolen badge alone cannot unlock the door if a PIN or biometric verification is also required. Likewise, an attacker guessing a PIN cannot gain entry without the corresponding badge.
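A toy decision function makes the "independent factors" requirement concrete: entry is granted only when a registered badge (possession) and the matching PIN (knowledge) are both presented. The badge database and hashing scheme below are illustrative assumptions, not a real access-control product.

```python
# Toy sketch of a two-factor door controller decision: entry requires both a
# registered badge ID and the matching PIN. Data and hashing scheme are illustrative.
import hashlib
import hmac

# badge_id -> salted SHA-256 hash of the holder's PIN (never store raw PINs)
BADGE_DB = {
    "badge-1042": hashlib.sha256(b"salt42" + b"7311").hexdigest(),
}


def grant_entry(badge_id: str, pin: str) -> bool:
    stored = BADGE_DB.get(badge_id)
    if stored is None:                                  # factor 1: badge must be registered
        return False
    candidate = hashlib.sha256(b"salt42" + pin.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored)       # factor 2: PIN must match


print(grant_entry("badge-1042", "7311"))   # True: both factors satisfied
print(grant_entry("badge-1042", "0000"))   # False: a stolen badge alone fails
```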

Comparing the correct answer with the alternatives highlights why those choices are insufficient. Single-factor authentication (A) uses only one method, such as a badge swipe, making it vulnerable to theft or cloning. A mantrap (B) is a physical structure designed to prevent tailgating or force verification but does not inherently require multiple authentication methods. Access logs (D) record entries and exits but do not prevent unauthorized access.

Multi-factor physical access controls are essential for compliance frameworks like PCI DSS, HIPAA, FISMA, and ISO 27001. These standards often mandate enhanced physical protections for data centers, server rooms, and areas storing sensitive information. Organizations must restrict entry not only to authorized personnel but also ensure mechanisms exist to verify identity with high assurance.

Modern physical access control systems integrate multiple technologies:
• Smart cards with encrypted chips
• PIN pads with anti-shoulder-surfing features
• Biometric scanners for fingerprints, iris patterns, or facial recognition
• Mobile authentication apps with one-time tokens
• Video surveillance for additional verification

Effective multi-factor systems also include anti-tampering protections, encrypted communication between access devices, fail-secure door mechanisms, and emergency override policies.

Because multi-factor physical access control provides robust protection by requiring multiple independent authentication elements before granting entry, answer C is correct.

QUESTION 126

Which logging mechanism collects events from multiple systems into a centralized repository, improving correlation, monitoring, and incident response efficiency?

A) Local Logging
B) Centralized Log Management
C) Debug Logging
D) Print Spooling

Answer:

B

Explanation:

Centralized log management is the correct answer because it consolidates logs from various systems such as servers, endpoints, firewalls, databases, applications, and cloud platforms into one unified location. This consolidation dramatically improves visibility across the environment, enabling organizations to detect threats earlier, analyze patterns, investigate security incidents, and maintain compliance. In contrast, local-only logging leaves logs scattered across individual systems, making investigations slow, inconsistent, and vulnerable to tampering.
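As a minimal illustration of how endpoints feed a central repository, the sketch below forwards application events to a syslog collector using Python's standard SysLogHandler. The collector hostname is a hypothetical example; in practice the destination is typically a log aggregator or SIEM listening on UDP or TCP 514.

```python
# Minimal sketch of shipping local events to a central collector with the
# standard-library SysLogHandler. The collector address is hypothetical.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("logcollector.example.com", 514))
handler.setFormatter(logging.Formatter("%(asctime)s host1 app[%(process)d]: %(message)s"))

logger = logging.getLogger("security")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.warning("Failed VPN login for user jsmith from 203.0.113.7")
```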

A key advantage of centralized logging is event correlation. Many modern cyberattacks do not produce a single obvious alert but instead leave subtle traces across multiple systems. For example, a compromised user account might show failed logins on a VPN appliance, unusual access attempts on a file server, and suspicious outbound network connections. Individually, these logs may seem insignificant, but when combined through a central logging platform, they reveal a coordinated attack pattern. Security teams can detect issues significantly faster by correlating logs across sources.

Centralized logging also forms the backbone of Security Information and Event Management (SIEM) systems. SIEM platforms ingest logs, normalize formats, apply threat detection rules, enrich logs with external intelligence, and trigger alerts for incidents that require investigation. Without centralized log collection, a SIEM would lack the visibility needed to identify multi-stage attacks or insider threats.

Another critical benefit is forensic integrity. When logs remain only on local devices, attackers who compromise those systems can modify or delete evidence. Centralized logging transmits logs to a remote system that attackers cannot easily access, preserving the integrity and authenticity of those records. Many centralized logging solutions also support hashing, digital signatures, and write-once storage methods that make unauthorized tampering detectable.

Comparing the correct answer with the incorrect options makes the choice clear. Local logging leaves logs scattered and difficult to manage. Debug logging focuses on technical troubleshooting, not security monitoring. Print spooling manages printing processes and has no relation to log analysis or security. Only centralized log management delivers the visibility and protection required for effective monitoring.

Centralized logs also support compliance frameworks such as PCI DSS, HIPAA, SOX, and ISO 27001, all of which require detailed audit trails and centralized monitoring for sensitive systems. Regulators expect organizations to demonstrate that they can detect anomalies and investigate incidents effectively. Without centralized logging, maintaining this level of oversight is extremely challenging.

Incident response becomes significantly more efficient with centralized logs. Responders can quickly search across all systems for indicators of compromise, pivot between log types, and reconstruct timelines accurately. Instead of manually logging into dozens of machines, investigators use unified dashboards and query tools that provide a complete picture of activity across the environment.

Additionally, centralized logging improves operational performance. Administrators can detect system failures, misconfigurations, performance bottlenecks, and policy violations early. They can also ensure time synchronization across systems using protocols like NTP, which is essential for accurate event reconstruction.

Because centralized log management uniquely provides unified visibility, the ability to correlate events across multiple systems, enhanced forensic integrity, and efficient incident investigation, answer B is correct.

QUESTION 127

Which cloud security practice ensures that data stored in the cloud remains protected even if a provider’s storage system is compromised, by encrypting data before it is uploaded?

A) Server-Side Encryption
B) Transport Layer Security
C) Client-Side Encryption
D) Key Escrow

Answer:

C

Explanation:

Client-side encryption is the correct answer because it ensures that data is encrypted before it leaves the user’s system, meaning the cloud provider never has access to the plaintext data or the encryption keys. This approach provides the highest level of confidentiality in cloud environments because even if the provider’s infrastructure is compromised, the attacker cannot read the data without the keys. In the shared responsibility model, the provider is responsible for securing the infrastructure, while the customer is responsible for protecting data. Client-side encryption places full control of data confidentiality in the hands of the customer.
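A minimal sketch of the workflow, assuming the third-party Python cryptography package is installed, is shown below: the file is encrypted locally with a key the customer keeps, and only ciphertext is ever uploaded. File names are hypothetical.

```python
# Sketch of client-side encryption before upload, using the third-party
# "cryptography" package (assumed installed: pip install cryptography).
# Only ciphertext leaves the client; the key stays under customer control.
from pathlib import Path

from cryptography.fernet import Fernet

key = Fernet.generate_key()             # store/escrow this key yourself, never with the provider
Path("local.key").write_bytes(key)

plaintext = Path("report.xlsx").read_bytes()            # hypothetical local file
ciphertext = Fernet(key).encrypt(plaintext)
Path("report.xlsx.enc").write_bytes(ciphertext)         # this is what gets uploaded

# Later, after downloading the object back:
restored = Fernet(key).decrypt(Path("report.xlsx.enc").read_bytes())
assert restored == plaintext
```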

To understand why client-side encryption is superior, it helps to compare it with server-side encryption. With server-side encryption, the cloud provider encrypts data after receiving it. Although server-side encryption protects data at rest, the provider still holds the encryption keys, so a compromise of the provider’s internal systems, an insider threat, or a misconfiguration could expose the plaintext. Client-side encryption avoids this risk by preventing the provider from ever seeing the decrypted content.

Transport Layer Security, or TLS, protects data in transit but does nothing to secure data once stored in the cloud. Therefore, it cannot protect against threats targeting data at rest. Key escrow involves storing copies of keys with a trusted party and is often used in regulated environments or lawful access scenarios, but key escrow does not itself encrypt the data or guarantee that providers cannot access it.

Client-side encryption is especially important for industries handling sensitive data, such as healthcare, finance, legal, and government sectors. Regulations such as HIPAA, PCI DSS, and GDPR often require strict control over who can decrypt sensitive information. By encrypting data before uploading it, organizations reduce compliance risk and protect data against accidental exposure, misconfigurations, and unauthorized access.

One significant advantage of client-side encryption is that it prevents cloud misconfigurations from exposing usable data. Public cloud storage bucket misconfigurations are a leading cause of data breaches. However, if the data inside the bucket is encrypted on the client side, attackers who access the bucket cannot read the contents.

That said, client-side encryption introduces responsibilities for secure key management. Losing encryption keys means losing access to the encrypted data permanently. Organizations must implement strong, redundant key management systems, including secure key storage, rotation policies, access controls, and auditing. Without these safeguards, the benefits of client-side encryption can quickly turn into operational risk.

Client-side encryption also enhances privacy. Cloud providers cannot analyze or mine encrypted data because it is unreadable. This protects organizations from insider threats and ensures confidentiality, even against administrators with elevated privileges.

Because client-side encryption maintains confidentiality independently of the cloud provider, protects against provider-side breaches, prevents unauthorized access even in cases of misconfiguration, and strengthens compliance posture, answer C is correct.

QUESTION 128

Which access control model uses security labels assigned to both subjects and objects, enforcing strict rules based on classification levels such as confidential, secret, and top-secret?

A) Role-Based Access Control
B) Mandatory Access Control
C) Discretionary Access Control
D) Attribute-Based Access Control

Answer:

B

Explanation:

Mandatory Access Control (MAC) is the correct answer because it uses classification labels and predefined security policies to strictly regulate which users (subjects) can access which resources (objects). SSCP candidates must understand MAC because it represents the most structured and rigid access control model, commonly used in military, government, and intelligence environments where confidentiality is critical.

In MAC environments, subjects receive clearance levels such as confidential, secret, or top-secret. Objects such as files, databases, and systems are labeled with classifications. Access is determined by the relationship between the subject’s clearance and the object’s classification. For example, a user with secret clearance can access secret and confidential resources but not top-secret ones. The policies governing these rules are not controlled by end users but are centrally enforced by the system and security administrators.

MAC systems implement two main principles (illustrated in the sketch after this list):
• No read-up: subjects cannot read data classified above their clearance.
• No write-down: subjects cannot write data to lower classifications, preventing leaks.
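The following short sketch expresses these label checks in the Bell-LaPadula sense, with illustrative classification levels mapped to integers.

```python
# Minimal sketch of MAC-style label checks: reads are allowed only at or below
# the subject's clearance (no read-up), writes only at or above it (no write-down).
LEVELS = {"confidential": 1, "secret": 2, "top-secret": 3}


def can_read(subject_clearance: str, object_label: str) -> bool:
    return LEVELS[subject_clearance] >= LEVELS[object_label]


def can_write(subject_clearance: str, object_label: str) -> bool:
    return LEVELS[subject_clearance] <= LEVELS[object_label]


print(can_read("secret", "confidential"))   # True  (reading down is allowed)
print(can_read("secret", "top-secret"))     # False (no read-up)
print(can_write("secret", "confidential"))  # False (no write-down)
print(can_write("secret", "top-secret"))    # True  (writing up is allowed)
```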

This structure prevents unauthorized disclosure by ensuring information flows only in permitted directions. This is essential in environments where confidentiality breaches could jeopardize national security or sensitive operations.

Comparing MAC to the other models highlights why the alternatives are incorrect. Role-Based Access Control (RBAC) assigns permissions based on job roles but does not enforce classification-based restrictions. Discretionary Access Control (DAC) allows resource owners to determine who may access their resources, which is unsuitable for high-security environments because users can share information at their discretion. Attribute-Based Access Control (ABAC) uses attributes such as department, location, or time of day to determine access and is more dynamic than MAC but not classification-based.

MAC requires strict administrative oversight. Only highly privileged administrators can modify classifications or clearance levels. Regular users cannot override or share access because the model removes discretion entirely. This prevents insider threats from sharing sensitive information intentionally or accidentally.

MAC systems often implement the Bell-LaPadula model for confidentiality or the Biba model for integrity. Bell-LaPadula enforces no read-up, no write-down rules to maintain confidentiality. Biba enforces no read-down, no write-up rules to protect data integrity. These models underpin how MAC environments prevent unauthorized access and limit information leaks.

MAC is beneficial in environments where the risk of data compromise is unacceptable. Government intelligence agencies, defense contractors, nuclear facilities, and high-security research labs use MAC to protect classified information. The rigor of MAC ensures that access decisions cannot be altered by ordinary users, providing high assurance that security policies are enforced consistently.

Because MAC enforces strict access controls based on classification labels and prevents discretionary sharing, answer B is correct.

QUESTION 129

Which incident response activity focuses on restoring affected systems to normal operating conditions after a security incident has been contained and eradicated?

A) Identification
B) Containment
C) Recovery
D) Preparation

Answer:

C

Explanation:

Recovery is the correct answer because it represents the phase of incident response where systems, services, and operations are restored to normal functioning after an incident has been contained and the root cause eliminated. SSCP candidates must understand the recovery phase fully because restoring system integrity, verifying configurations, and ensuring continued availability are essential for minimizing operational downtime and preventing recurring incidents.

The recovery phase comes after containment and eradication. Containment limits the damage and prevents further spread, and eradication removes the malicious components such as malware, compromised accounts, or exploited vulnerabilities. However, systems may still be offline, unstable, or in a degraded state. Recovery focuses on safely bringing them back online.

Recovery activities include restoring data from backups, rebuilding infected systems, reinstalling operating systems, applying patches, reconfiguring security settings, validating system integrity, and gradually reintroducing resources into production environments. Careful sequencing is required to avoid reintroducing contaminated systems or restoring compromised configurations.

One key objective of recovery is preventing recurrence. If a vulnerability allowed the initial attack, it must be patched before the system is brought back online. If a misconfiguration allowed unauthorized access, it must be corrected. Logs must be reviewed to confirm attackers did not leave persistent backdoors. Network traffic must be monitored for signs of continuing activity.

Comparing recovery with the incorrect options helps clarify the distinction. Identification is the earliest stage, where abnormal behavior is detected and confirmed as an incident. Containment stops the attack from spreading. Preparation includes planning, training, and creating incident response procedures. Only recovery focuses specifically on restoring normal operations.

Recovery also includes careful validation procedures. Systems must be tested to ensure stability and proper functionality. Administrators often restore systems in isolated network segments before returning them to production. This minimizes risk if remnants of the attack remain undetected.

Backup integrity is critical during recovery. If backups are corrupted or contain malware, restoring them could reintroduce the threat. Therefore, backups must be scanned, verified, and validated.
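One simple validation step is to recompute the backup image's digest and compare it to the value recorded at backup time, as in the sketch below (file names are hypothetical).

```python
# Sketch of validating backup integrity before restoring: recompute the SHA-256
# digest of the backup image and compare it to the digest recorded at backup time.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


recorded = Path("backup-2024-06-01.img.sha256").read_text().split()[0]
actual = sha256_of(Path("backup-2024-06-01.img"))

if actual == recorded:
    print("Backup digest matches; safe to proceed with restore")
else:
    print("Digest mismatch: backup may be corrupted or tampered with; do not restore")
```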

Communication is another important part of recovery. Stakeholders must be informed about system status, downtime expectations, and restoration timelines. In regulated industries, organizations may be required to notify regulators or customers once systems are restored.

Finally, recovery transitions into lessons learned, where organizations document what occurred, how the response unfolded, and what changes are needed. The insights gained improve future readiness and strengthen defenses.

Because the recovery phase specifically restores systems and operations after an incident has been resolved, answer C is correct.

QUESTION 130

Which vulnerability scanning type evaluates systems without authentication, providing only the perspective an external attacker would have when probing a target?

A) Credentialed Scan
B) Uncredentialed Scan
C) Compliance Scan
D) Patch Audit

Answer:

B

Explanation:

An uncredentialed scan is the correct answer because it assesses systems without using valid login credentials, meaning the scanner interacts with the target exactly as an outsider or attacker would. SSCP candidates must understand this type of scan because it reveals what vulnerabilities, misconfigurations, and exposures are visible to anyone on the network or internet, making it critical for evaluating perimeter defenses and external attack surfaces.

Uncredentialed scans attempt to access services, enumerate ports, probe for banners, send crafted packets, and test for common vulnerabilities without privileged access. They identify weak passwords, outdated services, publicly exposed APIs, application vulnerabilities, insecure protocols, open ports, and lack of proper filtering. This scanning type simulates an attacker’s reconnaissance phase, giving security teams insight into what an adversary can detect before launching deeper exploitation attempts.
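A tiny example of this reconnaissance is a banner grab: connect to a port and read whatever the service volunteers, without any credentials. The host and port list below are hypothetical, and such probes should only target systems you are authorized to test.

```python
# Sketch of uncredentialed reconnaissance: connect to each port and read the
# banner the service volunteers. Host and ports are hypothetical examples.
import socket

HOST = "scanme.example.org"
PORTS = [21, 22, 25, 80]

for port in PORTS:
    try:
        with socket.create_connection((HOST, port), timeout=2) as sock:
            try:
                banner = sock.recv(256).decode(errors="replace").strip()
            except socket.timeout:
                banner = "(open, no banner)"
            print(f"{port}/tcp open: {banner}")
    except OSError:
        print(f"{port}/tcp closed or filtered")
```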

Unlike credentialed scans, uncredentialed scans cannot log into systems to inspect internal configurations or installed patches. Therefore, the visibility is limited. However, this limitation is intentional: the goal is to mimic the capabilities of an external threat actor. Uncredentialed scans are especially important for testing firewalls, intrusion detection systems, web applications, and public-facing systems.

Comparing the correct answer with the alternatives highlights the distinction. Credentialed scans use valid login credentials, allowing deeper inspection of system configurations, installed patches, registry settings, and security policies. Compliance scans check whether systems adhere to standards such as CIS benchmarks, PCI DSS, or HIPAA requirements. Patch audits specifically verify whether systems have required patches installed.

Uncredentialed scans help organizations identify weak perimeter defenses, allowing them to close exposed ports, disable unnecessary services, enforce authentication mechanisms, and harden external-facing systems. They can also detect vulnerabilities such as outdated web servers, insecure default configurations, directory traversal flaws, missing security headers, and services that disclose too much information through banners.

Uncredentialed scanning is particularly important for organizations hosting public web applications or exposing services to the internet. Many real-world breaches begin with attackers scanning internet-facing systems for weaknesses. If organizations fail to detect and remediate these issues through their own uncredentialed scanning, attackers may exploit them first.

Because uncredentialed scanning provides the same limited visibility as an external attacker and focuses on identifying externally exposed vulnerabilities, answer B is correct.

QUESTION 131

Which authentication method verifies identity based on unique biological characteristics such as fingerprints, iris patterns, or facial geometry?

A) Token Authentication
B) Biometric Authentication
C) Password Authentication
D) Certificate Authentication

Answer:

B

Explanation:

Biometric authentication is the correct answer because it verifies a user’s identity based on inherent physical traits that are extremely difficult to forge, steal, or duplicate. SSCP candidates must understand the advantages, limitations, and operational requirements of biometrics because they are increasingly used in enterprise environments, physical access control systems, mobile devices, and high-security facilities. Unlike passwords or tokens, biometrics rely on characteristics that are unique to each individual, making them a strong authentication factor.

Biometrics are categorized into two types: physiological and behavioral. Physiological biometrics include fingerprints, iris or retinal scans, facial recognition, palm vein patterns, and DNA analysis. Behavioral biometrics involve patterns of behavior, such as typing rhythm, gait analysis, voice patterns, and touch dynamics. Physiological biometrics are generally more accurate and stable because physical traits do not change significantly over time, whereas behavioral traits may fluctuate.

One of the major advantages of biometrics is that they cannot be easily lost or forgotten. Users often forget passwords or lose access cards, but their physical traits remain with them at all times. This increases both security and convenience. Additionally, biometrics reduce risks associated with credential sharing. A password can be written down or intentionally disclosed, but a fingerprint or iris pattern cannot be shared in the same way.

Comparing biometrics to the incorrect answer choices clarifies why it is the correct choice. Token authentication uses physical devices such as smart cards or OTP tokens but does not involve biological traits. Password authentication relies on knowledge factors, which are vulnerable to guessing attacks, phishing, and shoulder surfing. Certificate authentication depends on cryptographic certificates installed on devices, which authenticate systems or users but do not involve biometric characteristics.

Despite their strengths, biometrics also introduce unique challenges. Storing biometric data raises privacy concerns because biometric identifiers, once compromised, cannot be changed like passwords. If a fingerprint database is breached, users cannot simply replace their fingerprints. Therefore, organizations must store biometric templates securely, often through hashing or encryption, and ensure that raw biometric data is never accessible to attackers.

False acceptance rates (FAR) and false rejection rates (FRR) must also be managed carefully. An overly strict biometric system may deny access to legitimate users, while an overly lenient threshold may allow unauthorized individuals. Additionally, environmental factors—such as lighting conditions for facial recognition or injuries affecting fingerprints—can influence accuracy.
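The threshold trade-off can be shown with a small worked example using synthetic match scores: raising the threshold reduces false acceptances but increases false rejections.

```python
# Worked sketch of the FAR/FRR trade-off using synthetic match scores.
genuine_scores = [0.91, 0.88, 0.95, 0.79, 0.85]   # legitimate users
impostor_scores = [0.40, 0.55, 0.62, 0.71, 0.30]  # unauthorized attempts


def rates(threshold: float) -> tuple[float, float]:
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr


for t in (0.6, 0.75, 0.9):
    far, frr = rates(t)
    print(f"threshold={t:.2f}  FAR={far:.0%}  FRR={frr:.0%}")
```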

Biometric authentication is also widely used in physical security systems. Many data centers, government buildings, and research facilities combine biometrics with other factors such as PINs or access cards to implement multi-factor physical authentication. This greatly reduces the risk of unauthorized entry.

Because biometrics provide identity verification using unique, inherent human characteristics and are difficult to counterfeit or steal, they serve as one of the strongest authentication mechanisms. For these reasons, answer B is correct.

QUESTION 132

Which type of attack involves intercepting communication between two parties to steal data, modify messages, or impersonate one of the communicating entities?

A) Denial-of-Service Attack
B) Man-in-the-Middle Attack
C) Brute-Force Attack
D) SQL Injection

Answer:

B

Explanation:

A man-in-the-middle (MITM) attack is the correct answer because it involves an attacker secretly intercepting, relaying, and possibly altering communication between two parties who believe they are communicating directly with one another. SSCP candidates must understand MITM attacks thoroughly because they target core principles of secure communication—confidentiality, integrity, and authenticity—and can occur across wired networks, wireless systems, and application-layer protocols.

In a typical MITM attack, an attacker positions themselves between a client and a server, allowing them to observe all exchanged data. This can be achieved through several techniques, including ARP spoofing, DNS spoofing, rogue Wi-Fi access points, session hijacking, or SSL stripping. Once positioned in the communication path, the attacker can capture sensitive data such as credentials, financial transactions, personal messages, or authentication tokens.

One of the most common MITM vectors is ARP spoofing on local networks. Attackers manipulate ARP tables to convince a victim’s device that the attacker’s MAC address corresponds to the gateway IP address. As a result, all traffic routes through the attacker. Another common MITM tactic is setting up a fake Wi-Fi access point that appears legitimate. Unsuspecting users connect to it, giving attackers full visibility into unencrypted traffic.

Comparing MITM with the incorrect options clarifies why it is the best answer. Denial-of-service attacks focus on making systems unavailable, not intercepting data. Brute-force attacks attempt to guess passwords by systematically trying combinations. SQL injection attacks target databases by injecting malicious code into application inputs. Only MITM involves interception and manipulation of communications.

MITM attacks are especially dangerous on unsecured or poorly secured networks. For example, attackers can use SSL stripping to downgrade encrypted HTTPS sessions to unencrypted HTTP connections. If users do not notice the absence of encryption, attackers can harvest sensitive information. Attackers may also inject malware, modify content, or manipulate transaction details in financial systems.

Preventing MITM attacks involves implementing multiple layers of defense. Strong encryption protocols such as TLS help ensure that even intercepted data cannot be decrypted without the proper keys. Certificate pinning ensures clients only accept certificates from trusted sources. DNSSEC protects against DNS spoofing. WPA3 strengthens protections on wireless networks. Network segmentation, intrusion detection systems, and endpoint security tools can also identify suspicious ARP or DNS behavior.
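The idea behind certificate pinning can be sketched in a few lines: fetch the server's certificate and compare its SHA-256 fingerprint to a known-good value recorded out of band. The host and pinned value below are hypothetical placeholders.

```python
# Sketch of a certificate-pinning style check against interception: compare the
# presented certificate's SHA-256 fingerprint to a pinned value recorded out of band.
import hashlib
import ssl

HOST, PORT = "example.com", 443
PINNED_FINGERPRINT = "replace-with-known-good-sha256-hex"   # hypothetical placeholder

pem = ssl.get_server_certificate((HOST, PORT))
der = ssl.PEM_cert_to_DER_cert(pem)
fingerprint = hashlib.sha256(der).hexdigest()

if fingerprint == PINNED_FINGERPRINT:
    print("Certificate matches the pinned fingerprint")
else:
    print(f"Fingerprint mismatch ({fingerprint}): possible interception, abort")
```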

MITM attacks also have implications for authentication. If session tokens or cookies are intercepted, attackers may hijack user sessions. Therefore, organizations must use secure cookies, implement multi-factor authentication, and invalidate sessions once suspicious activity is detected.

Because man-in-the-middle attacks rely on intercepting and possibly altering communications without the knowledge of either party, and because no other option describes this type of attack, answer B is correct.

QUESTION 133

Which disaster recovery site provides fully equipped hardware, networking, and system resources that can take over operations immediately after a primary site fails?

A) Warm Site
B) Cold Site
C) Hot Site
D) Mutual Aid Agreement

Answer:

C

Explanation:

A hot site is the correct answer because it is a fully operational disaster recovery location equipped with all necessary hardware, software, network connectivity, and real-time data replication capabilities to take over operations immediately following a primary site outage. SSCP candidates must understand the characteristics of hot sites because they represent the highest level of disaster recovery readiness, suitable for organizations with extremely low tolerance for downtime or data loss.

Hot sites are continuously maintained to mirror production environments as closely as possible. They often include synchronized storage systems, duplicate application servers, redundant network infrastructure, and automated failover mechanisms. Because data replication occurs in real time or near real time, Recovery Point Objectives (RPO) are extremely small, often close to zero. Recovery Time Objectives (RTO) are also minimal, sometimes measured in minutes.

Comparing a hot site with the incorrect options highlights its advantages. A cold site (B) provides only physical space and basic utilities; it contains no hardware or preconfigured systems, meaning recovery could take days or weeks. A warm site (A) includes some hardware and partially configured systems but requires manual updates and installation before becoming fully operational. A mutual aid agreement (D) involves two organizations assisting each other during failures and does not guarantee immediate operational readiness.

Operating a hot site requires significant investment. Organizations must purchase duplicate infrastructure, pay for continuous maintenance, and ensure real-time synchronization. For example, a financial trading firm or a hospital cannot tolerate extended downtime, making a hot site essential for operational continuity. Industries such as banking, healthcare, e-commerce, and critical infrastructure often rely on hot sites because any interruption can lead to financial loss, safety risks, or compliance violations.

Hot sites also enable seamless continuity during natural disasters, cyberattacks, power outages, or system failures. If ransomware infects the primary site, the organization can fail over to the hot site, ensuring availability while isolating and recovering the primary environment. This redundancy helps reduce the impact of catastrophic failures.

Hot sites are commonly incorporated into business continuity plans and disaster recovery strategies. Regular testing and failover exercises ensure that systems function correctly and that staff know how to activate the hot site if needed. Automated monitoring tools help ensure replication integrity and detect synchronization issues early.

Because hot sites provide immediate operational readiness with fully configured systems, real-time data replication, and minimal RTO/RPO values, answer C is correct.

QUESTION 134

Which type of access control allows resource owners to decide who can access their resources, often through file permissions and shared access settings?

A) Discretionary Access Control
B) Role-Based Access Control
C) Mandatory Access Control
D) Rule-Based Access Control

Answer:

A

Explanation:

Discretionary Access Control (DAC) is the correct answer because it allows the owner of a resource—such as a file, folder, or database entry—to determine who is granted access. SSCP candidates must understand DAC because it is commonly used in commercial operating systems such as Windows and Linux, where users and administrators assign file permissions at their discretion.

Under DAC, the resource owner has full authority to grant, modify, or revoke access for other users. Permissions often include read, write, execute, or full control, allowing owners to customize access based on individual needs. This flexibility makes DAC easy to use and implement, especially in environments where collaboration is important.

Comparing DAC with other access control models highlights its unique characteristics. Mandatory Access Control (MAC) enforces strict policies set by administrators and does not allow resource owners to make decisions. Role-Based Access Control (RBAC) assigns permissions based on job roles, not individual discretion. Rule-Based Access Control uses predefined rules, often tied to system events or contextual factors.

One major drawback of DAC is that it is vulnerable to insider threats. Because users can share access at their discretion, they may unintentionally grant access to unauthorized users or expose sensitive data. This flexibility can lead to privilege creep, misconfigurations, or accidental data leaks. Malware and attackers can also exploit DAC environments—if malware compromises a user with high permissions, it inherits those permissions automatically.

Despite these weaknesses, DAC remains widely used because of its ease of management and compatibility with collaborative workflows. Tools such as Access Control Lists (ACLs) and shared folder permissions are practical examples of DAC in real environments.
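On a POSIX system, DAC is visible in the permission bits an owner sets on their own files; the short sketch below (with a hypothetical filename) shows an owner restricting access at their discretion.

```python
# Sketch of DAC in practice on a POSIX system: the file owner adjusts who can
# read or write using standard permission bits. The filename is hypothetical.
import os
import stat

path = "project-notes.txt"
open(path, "a").close()                      # ensure the file exists

# Owner decides: owner read/write, group read, others nothing.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

mode = os.stat(path).st_mode
print("Effective permissions:", stat.filemode(mode))   # e.g. -rw-r-----
```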

Because DAC gives resource owners the authority to control who can access their resources, answer A is correct.

QUESTION 135

Which type of security testing involves simulating real-world attacks to evaluate an organization’s defenses, focusing on exploitation rather than simple identification of vulnerabilities?

A) Vulnerability Scanning
B) Penetration Testing
C) Code Review
D) Security Audit

Answer:

B

Explanation:

Penetration testing is the correct answer because it goes beyond identifying vulnerabilities and actively attempts to exploit them to determine whether an attacker could gain unauthorized access, escalate privileges, or compromise systems. SSCP candidates must understand penetration testing because it provides a realistic evaluation of an organization’s security posture, identifying weaknesses that are not always clear from automated scans or configuration reviews.

Penetration testing typically involves phases such as reconnaissance, scanning, exploitation, privilege escalation, lateral movement, data exfiltration simulation, and reporting. The goal is to replicate the techniques of attackers—using ethical and controlled methods—to reveal what damage could occur in a real attack scenario.

Comparing penetration testing with vulnerability scanning makes the distinction clear. Vulnerability scanning identifies weaknesses but does not exploit them. Code review focuses on analyzing source code for flaws but does not simulate external attacks. A security audit assesses compliance with policies or standards, not real-world adversarial behavior.

Penetration testing provides insight into chained vulnerabilities—situations where multiple seemingly low-risk weaknesses combine into a severe threat. Automated scanners cannot typically identify such complex attack paths.

Because penetration testing uniquely simulates real attacks and tests exploitability rather than just detecting weaknesses, answer B is correct.

QUESTION 136

Which network security device actively monitors traffic for signs of malicious activity and can automatically take action to block or prevent attacks?

A) Firewall
B) Intrusion Prevention System
C) Proxy Server
D) Switch

Answer:

B

Explanation:

An Intrusion Prevention System (IPS) is the correct answer because it not only detects malicious activity but also takes automated action to block, drop, or otherwise prevent harmful traffic from reaching its target. SSCP candidates must understand IPS functionality because it plays a critical role in modern layered defense systems, protecting networks from exploits, malware, unauthorized access attempts, and protocol anomalies.

An IPS works inline—meaning all traffic flows through it—and examines packets for known signatures, behavioral patterns, or anomalies. If the IPS detects malicious behavior, it can block the packet, reset the connection, quarantine the affected host, or trigger further security actions. This differentiates it from an Intrusion Detection System (IDS), which only monitors and alerts without taking action.

Comparing IPS with the incorrect options highlights its unique capabilities. A firewall enforces access rules but does not perform deep packet inspection or behavioral analysis at the same level as an IPS. A proxy server handles content filtering and traffic forwarding but is not designed for active threat prevention. A switch forwards traffic at Layer 2 and does not perform security inspection.

IPS technologies rely on multiple detection methods, including signature-based, anomaly-based, and protocol analysis. This allows them to detect known threats as well as new or suspicious behavior. They are commonly deployed behind firewalls or at network chokepoints, providing real-time protection.
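To illustrate the simplest of these methods, the toy sketch below matches payload bytes against known-bad patterns and returns a drop-or-pass verdict; the signatures and payloads are illustrative only, not real IPS rules.

```python
# Toy sketch of signature-based inspection: match payload bytes against
# known-bad patterns and decide to drop or pass. Signatures are illustrative.
import re

SIGNATURES = {
    "sql-injection": re.compile(rb"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(rb"\.\./\.\./"),
}


def inspect(payload: bytes) -> str:
    for name, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return f"DROP (matched {name})"
    return "PASS"


print(inspect(b"GET /search?q=1 UNION SELECT password FROM users"))
print(inspect(b"GET /index.html HTTP/1.1"))
```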

Because an IPS actively analyzes and blocks malicious traffic, functioning as a proactive security layer, answer B is correct.

QUESTION 137

Which data classification level is typically used for information that, if disclosed, would cause serious damage to an organization and must be tightly restricted?

A) Public
B) Confidential
C) Internal Use
D) Unclassified

Answer:

B

Explanation:

Confidential is the correct answer because it refers to sensitive information that, if exposed, could cause serious harm to an organization. SSCP candidates must understand classification levels because they guide the required protection mechanisms, access controls, and handling procedures to safeguard data.

Public information is intended for wide release and does not require protection. Internal use refers to information restricted within the organization but not considered highly sensitive. Unclassified is a term used primarily in government contexts to denote information that does not require protection. Confidential information, however, includes customer records, financial data, proprietary research, and sensitive business plans. Unauthorized disclosure could lead to reputational damage, financial loss, legal consequences, or competitive disadvantage.

Organizations often use classifications such as public, internal, confidential, and restricted. Confidential is commonly one of the higher tiers, requiring encryption, strict access control, monitoring, and audit logging.

Because confidential information requires tight restrictions and its disclosure would cause serious damage, answer B is correct.

QUESTION 138

Which encryption method uses a pair of mathematically related keys, one for encryption and one for decryption?

A) Symmetric Encryption
B) Asymmetric Encryption
C) Hashing
D) Steganography

Answer:

B

Explanation:

Asymmetric encryption is the correct answer because it uses two separate keys: a public key for encryption and a private key for decryption. SSCP candidates must understand asymmetric encryption because it forms the foundation of secure communications on the internet, including SSL/TLS, digital signatures, email encryption, and certificate-based authentication.

Symmetric encryption uses one key for both encryption and decryption, making key distribution a major challenge. Hashing is a one-way process that cannot be reversed, used for integrity but not encryption. Steganography hides data within other media and is not a form of encryption.

Asymmetric encryption solves the key distribution problem by allowing users to share public keys openly while keeping private keys secure. This enables secure key exchange, authentication, and non-repudiation. Common algorithms include RSA, ECC, and Diffie-Hellman.
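A minimal sketch of the key-pair idea, assuming the third-party Python cryptography package is installed, is shown below: anyone holding the public key can encrypt, but only the private-key owner can decrypt.

```python
# Sketch of asymmetric encryption with the third-party "cryptography" package
# (assumed installed): encrypt with the recipient's public key, decrypt with
# the matching private key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"wire transfer approved", oaep)   # anyone can encrypt
plaintext = private_key.decrypt(ciphertext, oaep)                  # only the key owner decrypts
assert plaintext == b"wire transfer approved"
```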

Because asymmetric encryption uniquely relies on key pairs, answer B is correct.

QUESTION 139

Which type of malware disguises itself as legitimate software to trick users into executing it, often causing infection or unauthorized system access?

A) Worm
B) Trojan
C) Rootkit
D) Botnet

Answer:

B

Explanation:

A Trojan is the correct answer because it masquerades as legitimate software—such as an installer, utility, or email attachment—to deceive users into running it. Once executed, it delivers malicious payloads, installs backdoors, steals data, or facilitates further compromise. SSCP candidates must understand Trojans because human deception and social engineering play major roles in their spread.

Unlike worms, Trojans do not self-replicate. Rootkits hide malicious activity but do not disguise themselves as legitimate software. A botnet is a network of infected systems, not a method of infection. Trojans rely on user interaction, such as downloading cracked software, clicking malicious links, or opening fake documents.

Attackers often bundle Trojans with phishing emails, freeware, or fake updates. Once installed, a Trojan may download additional malware, log keystrokes, steal credentials, or provide remote access to attackers. Trojans pose significant risks because they exploit trust, making technical defenses less effective unless supported by strong security awareness.

Because Trojans deceive users by appearing legitimate, yet execute malicious functions, answer B is correct.

QUESTION 140

Which business continuity document outlines step-by-step actions required to restore normal operations following a disruptive incident?

A) Asset Inventory
B) Disaster Recovery Plan
C) Service-Level Agreement
D) Network Diagram

Answer:

B

Explanation:

A disaster recovery plan (DRP) is the correct answer because it provides detailed, step-by-step procedures to restore IT systems, applications, data, and infrastructure after a disruptive event. SSCP candidates must understand DRPs thoroughly because they ensure continuity of operations after disasters such as cyberattacks, system failures, natural disasters, or human error.

An asset inventory documents hardware and software but does not guide recovery. A service-level agreement defines expected service performance but provides no restoration steps. A network diagram shows system topology but not recovery procedures.

A DRP includes recovery teams, communication protocols, restoration sequences, backup procedures, RPO/RTO targets, alternate site activation, and validation steps. The plan ensures that staff know exactly what to do during an emergency, minimizing downtime and preventing confusion. Testing and updating the DRP ensures it remains effective.

Because a DRP specifically outlines actions to restore operational capacity after disruption, answer B is correct.

 
