CompTIA CS0-003 CySA+ Exam Dumps and Practice Test Questions Set 2 Q 21-40

Visit here for our full CompTIA CS0-003 exam dumps and practice test questions.

Question 21

A security analyst observes repeated alerts from the SIEM showing that an internal workstation is initiating outbound connections to multiple random domains with high entropy values. The domains appear algorithmically generated, and traffic analysis shows periodic beaconing every 90 seconds. Which of the following should the analyst prioritize FIRST to confirm the root cause of the activity?

A) Review DNS query logs for indicators of DGA behavior
B) Reimage the workstation immediately
C) Block all outbound DNS traffic at the firewall
D) Force a password reset for the associated workstation user

Answer: A

Explanation:

When a workstation begins reaching out to random, high-entropy domains at consistent intervals, this behavior is strongly associated with malware leveraging a domain generation algorithm. A DGA enables threat actors to produce thousands of unpredictable domains rapidly, allowing them to maintain resilient command-and-control channels even if defenders block some addresses. The very first priority for the analyst is to confirm that the DNS queries indeed exhibit algorithmic characteristics. This is why reviewing DNS query logs is the correct initial step, as these logs contain a wealth of forensic evidence needed to validate the presence of DGA-based malware.

Option A, reviewing DNS query logs, allows analysts to identify suspicious query frequencies, unusual domain lengths, statistical irregularities, repetitive querying intervals, and domain entropy scores. DNS logs also reveal which DNS servers were contacted, whether requests were successful, and whether responses were unusually structured. Patterns within the logs often highlight beaconing intervals, encoded DNS payloads, and other indicators that cannot be recognized merely by looking at endpoint behavior.
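
To make the entropy idea concrete, here is a minimal Python sketch that scores queried domain names by Shannon entropy and flags high-entropy candidates for DGA review. It assumes the domains have already been extracted from DNS logs; the 3.5-bit threshold, 8-character minimum, and sample domains are illustrative, not tuned values.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Shannon entropy of a domain label, in bits per character."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_dga_candidates(domains, threshold=3.5):
    """Return (domain, entropy) pairs whose leftmost label exceeds the threshold.

    The threshold and length cutoff are illustrative starting points. A real
    implementation would score the registrable domain (using a public-suffix
    list) rather than simply the leftmost label.
    """
    suspects = []
    for domain in domains:
        label = domain.lower().rstrip(".").split(".")[0]
        if len(label) < 8:
            continue
        entropy = shannon_entropy(label)
        if entropy > threshold:
            suspects.append((domain, round(entropy, 2)))
    return suspects

# Hypothetical queried domains pulled from DNS logs
queries = ["mail.example.com", "kq3xz9vprt1m.net", "x7hw2bq9trkd.info"]
print(flag_dga_candidates(queries))
```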

Option B, reimaging the workstation, is an aggressive remediation step that analysts should never perform before confirming the nature of the threat and preserving evidence. Reimaging prematurely destroys volatile information that may help investigators understand lateral movement, command channels, and the initial infection vector.

Option C, blocking all outbound DNS traffic, is not feasible because it would break essential internet and internal application functionality. Organizations rely heavily on DNS for everyday operations, and disabling it would cause widespread service outages.

Option D, forcing a password reset, is not relevant to DGA malware. These infections typically do not depend on user credentials for outbound communication. Instead, they rely on embedded algorithms, backdoors, fileless components, or malicious services that continue operating regardless of password changes.

Question 22

A threat intelligence feed notifies the security team that a new malware campaign is targeting organizations by exploiting a zero-day vulnerability in a widely used database engine. The vulnerability allows remote code execution without authentication. What should the analyst recommend FIRST to minimize the organization’s exposure?

A) Deploy temporary compensating controls such as IPS signatures
B) Immediately migrate all databases to a new vendor
C) Shut down every production database server
D) Disable all user accounts until the vendor releases a patch

Answer: A

Explanation:

Zero-day vulnerabilities are particularly dangerous because they allow attackers to exploit systems before the vendor can release a patch. When these vulnerabilities affect a widely used database engine, the attack surface becomes enormous. The analyst must recommend immediate risk reduction through compensating controls. Deploying intrusion prevention signatures, implementing virtual patching, tightening firewall rules, and monitoring for anomalous database requests absorb the initial impact of the zero-day without disrupting operational continuity.

Option A, deploying compensating controls, is the most strategic initial action. An IPS can detect patterns associated with the exploit even if a patch is not yet available. For example, anomalous SQL queries, malformed packets, suspicious memory operations, or unauthorized function calls can all be identified and blocked. Web application firewalls can also provide immediate protection by filtering potentially dangerous database requests.
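
As a purely illustrative sketch of the virtual-patching idea (the patterns, payloads, and function name below are hypothetical and not vendor signatures), a simple filter can reject database-bound requests that match exploit-like payloads until an official patch ships:

```python
import re

# Hypothetical deny-list patterns approximating what an IPS/WAF rule might target.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),        # classic SQL injection probe
    re.compile(r"(?i)xp_cmdshell"),               # command execution via the database
    re.compile(r"(?i);\s*(drop|truncate)\s+table"),
]

def is_request_blocked(payload: str) -> bool:
    """Return True if the inbound request body matches a deny-list pattern."""
    return any(p.search(payload) for p in SUSPICIOUS_PATTERNS)

# Example: the filter would block this request while awaiting the vendor patch.
print(is_request_blocked("id=1 UNION SELECT password FROM users"))  # True
print(is_request_blocked("id=42"))                                  # False
```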

Option B, migrating databases, is impractical, highly disruptive, and cannot be executed quickly enough during an active threat campaign. Such a migration requires architectural redesign, data validation, resource allocation, and lengthy downtime, none of which are viable for immediate mitigation.

Option C, shutting down all production servers, causes catastrophic business impact. Essential services dependent on the database would become unavailable. This drastic action is reserved only for imminent, confirmed compromise where containment is impossible by other means.

Option D, disabling user accounts, is irrelevant to this type of vulnerability. The zero-day exploit requires no authentication, meaning attackers do not even need credentials. Restricting user accounts would not reduce the exposure at all.

Question 23

A cybersecurity analyst detects that a system has been compromised through a malicious browser extension that injects scripts into web pages, stealing authentication tokens and session cookies. The malware also modifies browser configuration files to maintain persistence. What should the analyst perform FIRST to ensure the threat is contained?

A) Remove the system from the network to stop data theft
B) Delete the browser extension and clear all cookies
C) Force a company-wide password reset immediately
D) Reinstall the browser and restore default settings

Answer: A

Explanation:

A malicious browser extension capable of stealing session cookies and authentication tokens is extremely dangerous because it allows attackers to bypass authentication entirely. They can hijack user sessions without credentials and gain unauthorized access to corporate applications. In such cases, containment is the top priority.

Option A, removing the system from the network, immediately stops further data exfiltration. When the endpoint is isolated, the malicious extension cannot communicate outward, cannot send harvested session cookies, and cannot receive additional commands. Analysts can then proceed with controlled forensic analysis without interference from active network traffic.

Option B, deleting the extension, is important but not the first action. If the system remains active on the network, the malware can continue transmitting stolen data even while removal attempts are underway. Additionally, the extension may have secondary payloads that execute autonomously regardless of deletion.

Option C, forcing a company-wide password reset, is premature because stolen session cookies bypass passwords entirely. Attackers may continue using stolen tokens for as long as they remain valid. This step may also create unnecessary operational strain without addressing the root issue.

Option D, reinstalling the browser, is insufficient because a compromised endpoint can reinfect the browser after reinstallation if persistent malware remains in registry keys, scheduled tasks, or local storage mechanisms.

Question 24

During a routine vulnerability assessment, an analyst discovers several servers still running outdated software versions with known privilege-escalation flaws. The patching team explains that the systems contain legacy applications that could break if patched. What should the analyst recommend to reduce the security risk while minimizing operational disruption?

A) Implement strict network segmentation and restrict access pathways
B) Shut down the legacy systems until the applications are replaced
C) Apply all patches immediately despite the application risks
D) Disable every user account associated with the legacy systems

Answer: A

Explanation:

Legacy systems with unpatchable vulnerabilities are a recurring challenge in cybersecurity. Outdated software may support critical business functions, but known privilege-escalation vulnerabilities expose them to serious exploitation. Analysts must prioritize risk reduction while preserving operational continuity. The best approach is to apply network segmentation and access restrictions.

Option A, segmentation, limits who and what can communicate with the vulnerable systems. By isolating the servers into tightly controlled network zones, the organization reduces lateral movement opportunities. Limiting administrative access, applying ACLs, restricting ports, and implementing firewall rules create layered barriers that prevent threat actors from exploiting the weaknesses. This approach is widely recommended in situations where patching is not immediately feasible.

Option B, shutting down the legacy systems, is often unrealistic because they typically support essential workflows. Abrupt shutdowns can cause operational delays, data unavailability, and business disruption.

Option C, applying patches without proper compatibility testing, can break legacy applications, corrupt data, or cause system crashes. This is especially risky for custom or outdated software that relies on deprecated libraries.

Option D, disabling accounts, does not mitigate privilege-escalation vulnerabilities. Attackers exploit software flaws, not valid user credentials. Even if accounts were disabled, threat actors could still escalate privileges after gaining an initial foothold through other vectors.

Question 25

A cybersecurity analyst is reviewing authentication telemetry within the organization’s cloud identity platform. The analyst identifies that a privileged account is successfully logging in from multiple distant geographic regions within extremely short intervals. The behavioral analytics engine flags this as highly suspicious. Which security control would MOST effectively prevent this event from occurring again in the future?

A) Implement conditional access controls that enforce impossible-travel detection
B) Require users to rotate their passwords every 45 days
C) Increase the default password complexity settings
D) Disable external login capabilities for all accounts

Answer: A

Explanation:

When an identity analytics system detects a user logging in from multiple widely separated geographic locations within an unrealistic timeframe, this behavior strongly suggests credential compromise, misuse of session tokens, or unauthorized access through automated attack infrastructures. In cloud-centric environments, attackers commonly leverage proxy networks, botnets, or distributed cloud compute instances to simulate legitimate user origins. This technique is frequently used in account takeover scenarios, privilege escalation attempts, and reconnaissance operations.

The correct control to prevent this specific type of anomalous activity is option A, implementing conditional access policies that include impossible-travel detection. Conditional access solutions are deeply analytics-driven and evaluate contextual indicators such as the physical distance between login sources, travel velocity, device health, behavioral baselines, and historical account patterns. Impossible-travel logic uses mathematical models to determine whether a human could legitimately authenticate from two separate locations within the observed timeframe. When this anomaly is identified, access can be blocked automatically or secondary verification can be triggered.
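
A minimal sketch of the impossible-travel calculation, assuming login events already carry latitude and longitude; the coordinates, timestamps, and 900 km/h threshold below are illustrative:

```python
import math
from datetime import datetime

MAX_PLAUSIBLE_KMH = 900  # roughly airliner speed; anything faster is "impossible travel"

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_impossible_travel(login_a, login_b):
    """Each login is (timestamp, lat, lon); True if the implied speed is unrealistic."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = (t2 - t1).total_seconds() / 3600 or 1e-6  # avoid division by zero
    speed = haversine_km(lat1, lon1, lat2, lon2) / hours
    return speed > MAX_PLAUSIBLE_KMH

# Hypothetical example: New York login, then Singapore 40 minutes later
a = (datetime(2024, 5, 1, 9, 0), 40.71, -74.01)
b = (datetime(2024, 5, 1, 9, 40), 1.35, 103.82)
print(is_impossible_travel(a, b))  # True
```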

Option B, requiring users to rotate passwords every 45 days, does not meaningfully reduce the risk of real-time credential misuse. Modern security guidance discourages unnecessary password rotation because it encourages predictable password variations, creates user fatigue, and fails to prevent attackers from reusing compromised credentials that have already been obtained through phishing or credential dumping. Rotation policies address theoretical long-term risks but do not prevent immediate threats.

Option C, increasing password complexity, is beneficial for baseline security hygiene but does not deter attackers who already possess legitimate credentials. Attackers using stolen passwords bypass complexity entirely because they use already-valid combinations. Complexity improvements are preventive rather than detective or adaptive controls, and are therefore insufficient in scenarios involving real-time anomalous authentication events.

Option D, disabling external login capabilities for all accounts, is operationally unrealistic. Many organizations rely on remote or cloud-based access for administrators, security teams, developers, and distributed workforce members. Shutting down external authentication would severely disrupt workflows and offer no granularity; security controls must be adaptive, not excessively restrictive.

Question 26

Which of the following host-level behaviors most strongly indicates that an attacker is launching a fileless malware attack through malicious PowerShell execution on a compromised endpoint?

A) Consistent PowerShell execution with encoded command strings and no associated file writes
B) Frequent user-initiated PowerShell scripts scheduled through authorized automation tools
C) High CPU usage caused by legitimate system patching and update operations
D) Routine PowerShell logging activity aligned with normal administrative maintenance windows

Answer: A

Explanation:

Fileless malware is increasingly used by modern adversaries because it avoids traditional detection methods that depend on identifying file signatures, executable artifacts, or persistent binaries on disk. Instead, attackers rely heavily on in-memory execution, native administrative tools, and living-off-the-land techniques. PowerShell is one of the most frequently abused utilities because it provides direct access to Windows APIs, can run completely in memory, and supports obfuscation methods such as encoding and compression.

Option A is the strongest indicator of such an attack because encoded PowerShell commands combined with the absence of file-based activity are classic hallmarks of fileless malware behavior. In these attacks, malicious payloads are often delivered through phishing documents, malicious scripts, or remote exploitation, and then executed directly in memory using techniques like PowerShell Empire, Cobalt Strike beacons, or encoded Base64 command strings. Attackers may disable logging, evade antivirus tools, and leverage PowerShell remoting to move laterally.
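A hedged sketch of how an analyst might triage collected process command lines for this indicator; the telemetry record and command fragment below are hypothetical, while the -EncodedCommand flag and UTF-16LE Base64 encoding are standard PowerShell behavior:

```python
import base64
import re

ENCODED_FLAG = re.compile(r"(?i)-e(nc(odedcommand)?)?\s+([A-Za-z0-9+/=]{20,})")

def decode_powershell(cmdline: str):
    """If the command line uses -EncodedCommand, return the decoded script text."""
    match = ENCODED_FLAG.search(cmdline)
    if not match:
        return None
    try:
        # PowerShell encodes the payload as Base64 over UTF-16LE text.
        return base64.b64decode(match.group(3)).decode("utf-16-le", errors="replace")
    except Exception:
        return "<undecodable payload>"

# Hypothetical telemetry record containing a harmless encoded fragment
cmd = "powershell.exe -NoP -W Hidden -Enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQA"
print(decode_powershell(cmd))  # "IEX (New-Object"
```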

Option B describes legitimate activity. Administrators often schedule PowerShell tasks for automation purposes; such usage is usually predictable and well-documented. Although attackers can masquerade as automation processes, genuine automation rarely includes obfuscation or encoded commands designed to conceal intent.

Option C highlights performance degradation, which is not a reliable indicator of fileless malware. Many legitimate system activities cause high CPU usage, including patch deployments and scanning tools. Fileless techniques typically aim for stealth and may not generate obvious resource spikes.

Option D involves routine PowerShell logging that aligns with known maintenance cycles. Legitimate administrative operations produce clear audit trails and normally do not include suspicious obfuscation patterns, unexpected child processes, remote command execution, or anomalous privilege escalations.

In a CySA+ context, analysts must understand behavioral indicators of compromise rather than relying solely on signature-based alerts. Critical behaviors include encoded commands, network callbacks to known malicious command-and-control servers, unauthorized use of PowerShell modules, injection into system processes, and unexpected privilege changes. Defensive strategies include enabling robust PowerShell logging, implementing constrained language mode, controlling script execution policies, and enforcing application whitelisting.

Because fileless attacks operate in memory, the most reliable detection strategy involves monitoring anomalous command execution patterns, unusual parent-child process relationships, suspicious module loading, and encoded PowerShell activity. Therefore, option A is the most accurate and relevant indicator.

Question 27

After detecting a surge of brute-force authentication attempts targeting several critical internal services, which immediate incident-response action should the analyst prioritize first to limit attacker progress?

A) Implement account lockout and rate-limiting controls to halt ongoing brute-force attempts
B) Reset all user passwords across the organization immediately
C) Shut down authentication servers until analysis is complete
D) Notify all employees to change passwords independently

Answer: A

Explanation:

Brute-force attacks often indicate that adversaries are attempting to gain unauthorized access through rapid password-guessing. When these attacks target critical services such as VPN gateways, Active Directory systems, privileged remote consoles, or cloud identity providers, the risk of compromise escalates dramatically. The primary goal during this type of incident is to interrupt the attacker’s ability to continue attempting password combinations.

Option A, implementing account lockout policies and rate limiting, represents the correct immediate response. These controls reduce authentication attempts, frustrate automated scripts, and buy the security team time to investigate. Rate limiting slows attackers even when distributed brute-force techniques or credential-stuffing attacks are used. Account lockout thresholds prevent attackers from making infinite attempts.
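A simplified sketch of the lockout logic, assuming failed-authentication events arrive as (username, timestamp) pairs; the threshold and window values are illustrative, not policy recommendations:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

LOCKOUT_THRESHOLD = 5           # failures allowed inside the window
WINDOW = timedelta(minutes=10)  # sliding observation window

failures = defaultdict(deque)   # username -> recent failure timestamps
locked = set()

def record_failure(username: str, when: datetime):
    """Track a failed login and lock the account once the threshold is crossed."""
    q = failures[username]
    q.append(when)
    while q and when - q[0] > WINDOW:  # drop failures older than the window
        q.popleft()
    if len(q) >= LOCKOUT_THRESHOLD:
        locked.add(username)

# Hypothetical burst of failures against one account
start = datetime(2024, 5, 1, 8, 0)
for i in range(6):
    record_failure("svc-backup", start + timedelta(seconds=i * 5))
print("svc-backup" in locked)  # True
```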

Option B, resetting all passwords, is impractical and can overload help desks, disrupt operations, and worsen the situation by creating confusion. Password resets are valuable later in the response process, but not as an immediate containment measure.

Option C, shutting down authentication servers, is far too extreme and can cripple business operations. Removing essential authentication capabilities also complicates incident response by hindering monitoring, user access, and system recovery.

Option D, notifying employees to change passwords, is a low-value response during an active brute-force attack. Attackers do not rely on employees’ willingness to comply with advisories; they rely on automation and credential reuse.

Security analysts must recognize brute-force indicators: numerous failed login attempts, source IP diversity, high-velocity login failures, repetitive username cycles, and anomalies within authentication logs. Beyond immediate containment, analysts should also review threat intelligence sources, identify password reuse vulnerabilities, audit MFA enforcement, evaluate firewall and IDS logs, and strengthen identity governance.

Therefore, option A is the most effective and time-critical response to halt the attack.

Question 28

Which network behavior most strongly suggests that an attacker is performing encrypted data exfiltration through unauthorized cloud-storage services?

A) Persistent outbound TLS traffic to unfamiliar cloud domains with large, uniform data transfers
B) Occasional HTTPS browsing to well-known business-approved platforms
C) Intermittent DNS requests consistent with routine endpoint activity
D) Local file-sharing operations within a secure internal network segment

Answer: A

Explanation:

Encrypted data exfiltration is one of the most difficult attack techniques to detect because malicious traffic often blends into legitimate encrypted network flows. Attackers frequently exploit cloud-storage providers, collaboration tools, and upload APIs due to their widespread adoption and reduced likelihood of raising suspicion.

Option A is the clearest indicator of malicious exfiltration. Persistent outbound TLS connections to little-known cloud domains combined with large, uniform data transfer patterns suggest automated uploads. Attackers often compress and encrypt data before transmitting it to remote storage buckets, command-and-control servers disguised as legitimate cloud infrastructure, or anonymized storage providers. Uniform packet size, repetitive upload intervals, and continuous TLS sessions are highly suspicious.
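One way to surface this pattern from flow records is sketched below, assuming flows have already been summarized as (destination domain, bytes sent) tuples; the approved-provider list, size floor, and variance threshold are illustrative:

```python
from collections import defaultdict
from statistics import mean, pstdev

APPROVED = {"sharepoint.com", "box.com"}  # hypothetical sanctioned providers

def uniform_upload_suspects(flows, min_flows=10, min_bytes=500_000, max_cv=0.05):
    """Flag destinations receiving many similarly sized, large outbound transfers.

    max_cv is the coefficient of variation (stdev/mean); near zero means uniform sizes.
    """
    by_dest = defaultdict(list)
    for dest, sent_bytes in flows:
        by_dest[dest].append(sent_bytes)

    suspects = []
    for dest, sizes in by_dest.items():
        if dest in APPROVED or len(sizes) < min_flows:
            continue
        avg = mean(sizes)
        if avg >= min_bytes and pstdev(sizes) / avg <= max_cv:
            suspects.append(dest)
    return suspects

# Hypothetical flow summary: 12 nearly identical ~1 MB uploads to an unknown domain
flows = [("files-sync-cdn.net", 1_048_576 + i) for i in range(12)]
print(uniform_upload_suspects(flows))  # ['files-sync-cdn.net']
```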

Option B indicates normal user web browsing and is unlikely to represent malicious activity. Legitimate business-approved platforms are usually monitored and controlled.

Option C describes routine DNS behavior. DNS queries occur constantly as part of normal computer operations and do not inherently indicate exfiltration.

Option D is internal file sharing, which does not involve outbound encrypted transfers.

Analysts must identify indicators such as anomalous TLS fingerprints, irregular server certificates, upload size anomalies, traffic outside business hours, and communication with unapproved cloud providers. Detecting encrypted exfiltration often requires baselining normal behavior, analyzing flow logs, evaluating proxy telemetry, using DLP tools, and integrating cloud-native security analytics.

Therefore, option A represents the strongest signal of encrypted exfiltration.

Question 29

Which forensic artifact provides the clearest evidence of persistence created by an attacker using registry modifications and scheduled tasks on a compromised system?

A) A registry Run key pointing to an unauthorized executable launched at user logon
B) Standard event logs showing normal system boot behavior
C) A list of legitimate background services activated during patch deployment
D) Routine software inventory metadata collected by IT management tools

Answer: A

Explanation:

Persistence mechanisms allow attackers to maintain control even after reboots, logoffs, or partial remediation. Windows provides numerous persistence opportunities, including registry Run keys, scheduled tasks, service installation, WMI event subscriptions, and startup folder manipulation.

Option A is the most direct and concrete artifact: a registry Run key referencing an unknown executable. Attackers frequently use Run and RunOnce keys because they execute automatically when a user logs in. When combined with scheduled tasks, these keys offer redundancy and ensure long-term persistence. Registry artifacts also store timestamps, file paths, and modification details critical to forensic investigation.
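On a live Windows host, an analyst could enumerate autorun values with the standard-library winreg module, as in the Windows-only sketch below; the allow-list of expected entry names is hypothetical and would come from an organizational baseline:

```python
import winreg  # Windows-only standard-library module

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

EXPECTED = {"SecurityHealth", "OneDrive"}  # hypothetical known-good value names

def enumerate_run_entries():
    """Yield (value name, command) for every autorun entry under the Run keys."""
    for hive, path in RUN_KEYS:
        try:
            with winreg.OpenKey(hive, path) as key:
                index = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(key, index)
                        yield name, value
                        index += 1
                    except OSError:   # no more values under this key
                        break
        except FileNotFoundError:
            continue

for name, command in enumerate_run_entries():
    if name not in EXPECTED:
        print(f"Review autorun entry: {name} -> {command}")
```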

Option B consists of generic event logs that may or may not reveal anomalies. Normal boot logs provide limited insight into unauthorized persistence.

Option C describes legitimate background processes activated during patching and does not necessarily correlate with attacker activity.

Option D involves inventory metadata, which rarely includes detailed forensic evidence.

In investigations, analysts must inspect registry hives, scheduled task XML files, event logs for task creation, and file timestamps. Malicious artifacts often stand out due to irregular naming conventions, suspicious file paths, random strings, or unsigned executables. Identifying unauthorized persistence is crucial for eradicating an attacker’s foothold and preventing re-infection.

Given these factors, option A provides the clearest evidence of malicious persistence.

Question 30

Which security control most effectively prevents attackers from exploiting stolen session tokens to gain unauthorized access to sensitive cloud applications?

A) Enforce continuous session validation and automatic token revocation when anomalies are detected
B) Require password rotation for all accounts every 30 days
C) Block all remote access for standard users
D) Increase password complexity rules across the organization

Answer: A

Explanation:

Session-token theft enables attackers to bypass authentication entirely by replaying valid tokens captured through phishing, malware, or network interception. In cloud environments, attackers use these tokens to impersonate users, escalate privileges, and access confidential resources.

Option A is the most effective defense because continuous session validation monitors session behavior in real time, identifies anomalies such as impossible travel, unusual device fingerprints, abnormal access times, or suspicious resource usage, and revokes tokens immediately. This approach aligns with zero-trust principles and adaptive identity protection.

Option B, password rotation, does not invalidate active stolen tokens. Attackers often exploit tokens without needing passwords at all.

Option C, blocking remote access, is unrealistic and disruptive for modern cloud-based organizations.

Option D, improving password complexity, also fails to affect active token misuse.

Cloud security best practices emphasize short token lifetimes, conditional access, device compliance checks, and behavioral analytics to prevent token replay attacks. Continuous validation ensures tokens remain trustworthy throughout the session, not just at login.

Question 31

Which of the following indicators most strongly suggests that an attacker is attempting lateral movement within a Windows domain environment using compromised administrative credentials?

A) Unusual remote PowerShell sessions initiated from low-privilege user workstations
B) Routine file transfers between authorized internal servers during maintenance
C) Frequent outbound HTTPS traffic to approved cloud platforms
D) Normal SMB file-sharing activities among standard employees

Answer: A

Explanation:

Lateral movement occurs when an attacker expands their foothold across a compromised network by leveraging stolen credentials, abusing trust relationships, and exploiting remote management tools. In a Windows domain environment, lateral movement frequently involves remote PowerShell execution, remote WMI queries, Remote Desktop Protocol attempts, Pass-the-Hash techniques, and unauthorized Kerberos ticket usage. Analysts must identify subtle indicators of malicious propagation because these actions often blend seamlessly with legitimate administrative activity.

Option A is the strongest indicator of lateral movement because unusual remote PowerShell sessions originating from low-privilege workstations are highly anomalous. Standard users typically do not initiate remote command execution across servers. Attackers, however, often rely on PowerShell because it is built-in, scriptable, and capable of executing commands in memory. Malicious actors who obtain administrative credentials frequently pivot across hosts by starting remote PowerShell sessions to enumerate systems, deploy payloads, modify configurations, or escalate privileges further. Lateral movement activity frequently appears in logs as unexpected WinRM authentication, odd PowerShell process trees, or credential use from systems where privileged users do not normally log in.
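As a sketch, assuming process-creation telemetry has been exported as records with host, user, image, and command-line fields (the field names and workstation naming convention are hypothetical, while Invoke-Command, Enter-PSSession, and -ComputerName are genuine PowerShell remoting constructs):

```python
import re

WORKSTATION_PREFIX = "WKS-"  # hypothetical endpoint naming convention
REMOTING_HINTS = re.compile(r"(?i)(enter-pssession|invoke-command|-computername)")

def flag_outbound_remoting(events):
    """Flag PowerShell remoting commands launched from ordinary workstations."""
    hits = []
    for e in events:
        if (e["host"].startswith(WORKSTATION_PREFIX)
                and e["image"].lower().endswith("powershell.exe")
                and REMOTING_HINTS.search(e["command_line"])):
            hits.append((e["host"], e["user"], e["command_line"]))
    return hits

# Hypothetical process-creation record exported from endpoint telemetry
events = [{
    "host": "WKS-0172",
    "user": "jdoe",
    "image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "command_line": "powershell Invoke-Command -ComputerName FS01 -ScriptBlock { whoami }",
}]
print(flag_outbound_remoting(events))
```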

Option B reflects legitimate server-to-server file transfers that occur during patching or backup cycles. These operations are usually scheduled, predictable, and documented in change-control records. They do not typically indicate malicious credential use.

Option C is normal behavior. Outbound HTTPS traffic to approved cloud environments aligns with everyday web, email, and productivity operations. Without additional anomalies, this traffic is insufficient to indicate lateral movement.

Option D describes standard SMB activity among employees. Routine file sharing, printing, and access to department shares are normal within enterprise environments and rarely indicate covert traversal.

To detect lateral movement, analysts must examine authentication logs, remote execution logs, Windows event IDs relating to user logon types, Kerberos ticket issuance, PowerShell logs, and endpoint telemetry. Unusual parent-child process relationships and execution from unexpected devices are major red flags. Correlating movement patterns across multiple hosts helps identify coordinated malicious activity.

Thus, option A is the correct answer because anomalous remote PowerShell activity from low-privilege endpoints signals unauthorized propagation using compromised credentials.

Question 32

A security analyst observes repeated login failures to a privileged Linux account from several internal IP addresses. The attempts appear sequential and automated. What should the analyst prioritize FIRST to mitigate the threat?

A) Implement SSH rate limiting and temporary IP blocking for failed attempts
B) Delete the Linux account being targeted
C) Disable SSH access entirely across the network
D) Ask employees to manually reset their account passwords

Answer: A

Explanation:

Sequential automated login attempts targeting a privileged Linux account indicate a brute-force authentication attack originating from internal systems, possibly from compromised hosts or misconfigured tools. The first priority in such scenarios is to halt the automated attempts immediately, preventing attackers from eventually guessing the correct credentials or triggering unintended service disruptions.

Option A, implementing SSH rate limiting and blocking offending IP addresses, is the most effective immediate response. Rate limiting reduces the velocity of repeated authentication attempts, making automated attacks ineffective. Tools such as fail2ban, firewall rules, and SSH configuration adjustments can rapidly limit access. Blocking suspicious internal IPs helps contain compromised endpoints, preventing further exploitation. These steps maintain system availability while providing analysts time to investigate deeper.
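A minimal sketch of the triage step, parsing OpenSSH "Failed password" lines and counting attempts per source IP; the block threshold is illustrative, and actual blocking would be handed off to the firewall or a tool such as fail2ban:

```python
import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)")

def sources_to_block(auth_log_lines, threshold=20):
    """Return source IPs whose failed SSH attempts meet or exceed the threshold."""
    counts = Counter()
    for line in auth_log_lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return [ip for ip, n in counts.items() if n >= threshold]

# Hypothetical excerpt resembling /var/log/auth.log entries
sample = [
    f"May  1 08:00:{i:02d} bastion sshd[4321]: Failed password for root from 10.20.30.40 port 5{i:04d} ssh2"
    for i in range(25)
]
print(sources_to_block(sample))  # ['10.20.30.40']
```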

Option B, deleting the targeted account, may break dependencies or shut down critical system processes tied to that account. Removing privileged accounts without analysis risks disrupting operations and obscuring evidence. Additionally, the root issue, the active brute-force attack, remains unaddressed.

Option C, disabling SSH network-wide, is extremely disruptive. SSH is essential for remote management, automation, monitoring, and administrative operations. Taking SSH offline blinds the security team and prevents legitimate maintenance tasks.

Option D, asking employees to reset passwords, is not relevant because typical users cannot influence privileged Linux service accounts. This action also does nothing to stop the ongoing brute-force attempts.

From a CySA+ perspective, detecting internal brute-force activity requires reviewing authentication logs, correlating timestamps across multiple hosts, evaluating SIEM alerts, and identifying compromised systems. After initial containment, analysts should examine whether internal IPs are infected, misconfigured, or being used maliciously. Implementing multi-factor authentication, enforcing strong SSH key usage, and hardening daemon configurations significantly reduces the likelihood of future brute-force attacks.

Thus, option A is the correct first action because it stops the active threat while preserving operational continuity.

Question 33

Which network pattern most likely indicates that an attacker is establishing covert command-and-control communication using domain fronting?

A) TLS requests to legitimate content delivery networks with hidden redirect patterns
B) DNS queries to internal, business-approved domains
C) Routine HTTPS traffic to corporate email services
D) SMB file transfers between authorized internal hosts

Answer: A

Explanation:

Domain fronting is a stealth technique where attackers disguise malicious command-and-control traffic by routing it through legitimate content delivery networks or cloud service providers. The attacker uses different hostnames in various layers of the TLS handshake, allowing harmful traffic to appear as benign requests to well-known infrastructure. This makes detection difficult because the outer domain looks normal, but the inner request communicates with an attacker-controlled endpoint.

Option A is the strongest indicator. TLS requests directed at legitimate CDNs but containing hidden redirect paths, anomalous host headers, or mismatched SNI values strongly suggest domain fronting. Adversaries exploit this behavior to evade firewalls, bypass reputation-based security controls, and hide their true destination. Analysts often detect this through TLS inspection, metadata analysis, certificate fingerprint anomalies, or unusual request routing within CDN logs.
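A simplified detection sketch, assuming a TLS-inspection proxy has already paired the SNI from each handshake with the inner HTTP Host header (the field names, addresses, and domains are illustrative):

```python
def fronting_suspects(sessions):
    """Flag sessions where the TLS SNI and the inner HTTP Host header disagree."""
    suspects = []
    for s in sessions:
        sni = s["sni"].lower().rstrip(".")
        host = s["http_host"].lower().rstrip(".")
        if sni and host and sni != host:
            suspects.append((s["src"], sni, host))
    return suspects

# Hypothetical decrypted-session metadata: outer CDN hostname, inner attacker host
sessions = [
    {"src": "10.1.4.22", "sni": "cdn.wellknown-provider.com", "http_host": "cdn.wellknown-provider.com"},
    {"src": "10.1.4.37", "sni": "cdn.wellknown-provider.com", "http_host": "update-sync.badactor.net"},
]
print(fronting_suspects(sessions))
```

Because shared CDN hosting can produce benign mismatches, a filter like this is a triage aid that feeds further review, not a verdict on its own.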

Option B represents normal DNS behavior. Internal domains are rarely used for covert external command-and-control pathways, and legitimate internal DNS traffic lacks anomalous redirect patterns.

Option C is ordinary HTTPS communication to standard enterprise services. Such traffic is common and typically uses predictable server fingerprints and consistent request sequences.

Option D describes standard SMB file-sharing activity that remains within the internal network and does not involve external CDNs or bypass techniques.

CySA+ analysts must understand advanced evasion strategies, including domain fronting, protocol smuggling, encrypted tunnels, host header manipulation, and abuse of content delivery networks. Detecting domain fronting requires examining TLS handshake details, comparing SNI values to hostnames requested, analyzing outbound session patterns, and looking for inconsistencies between packet metadata and application-layer headers.

Thus, option A is the correct indicator of domain-fronted C2 communication.

Question 34

A cybersecurity analyst discovers multiple hosts communicating with an external IP at regular intervals of exactly 30 seconds. The traffic consists of small, uniform encrypted packets. What does this MOST likely indicate?

A) Beaconing from malware to a command-and-control server
B) Standard heartbeat checks from a network monitoring tool
C) Routine synchronization to a corporate time-server
D) Normal encrypted user browsing traffic

Answer: A

Explanation:

When multiple systems exhibit repetitive outbound communication at fixed intervals, especially involving small encrypted packets, it is highly indicative of beaconing behavior. Beaconing is a common technique used by malware to maintain communication with a remote command-and-control server. The short, predictable timing intervals combined with minimal encrypted payloads typically represent status updates or instructions being transmitted secretly.

Option A represents the most likely explanation. Beaconing activity often appears uniform, low-bandwidth, and periodic. Attackers design beacon packets to be small enough to evade detection while maintaining persistent control over compromised hosts. These communications may include check-ins, tasking requests, or responses from malware implants.

Option B refers to network monitoring heartbeat traffic, but such tools usually communicate with internal management servers, not external unknown IPs. Their intervals also vary depending on configuration and rarely show identical cross-host patterns that align with malicious beaconing.

Option C relates to time synchronization, but NTP servers do not rely on encrypted periodic traffic every 30 seconds and typically operate on a different protocol altogether.

Option D is incorrect because normal browsing traffic is highly variable in both size and timing. User activity does not produce consistent, repetitive packet sequences.

Beaconing detection is crucial for CySA+ candidates. Analysts review NetFlow data, firewall logs, packet captures, endpoint telemetry, and SIEM alerts to identify command-and-control patterns. Features such as exact timing intervals, uniform packet lengths, encrypted communications to unknown destinations, and multi-host synchronization patterns all suggest malware coordination.
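A sketch of one such heuristic, measuring how regular the inter-arrival times are for each source and destination pair; it assumes flow records have been reduced to (source, destination, epoch timestamp) tuples, and the thresholds are illustrative:

```python
from collections import defaultdict
from statistics import mean, pstdev

def beacon_candidates(flows, min_events=10, max_jitter_ratio=0.05):
    """Flag (src, dst) pairs whose connection intervals are almost perfectly regular."""
    times = defaultdict(list)
    for src, dst, ts in flows:          # ts = epoch seconds
        times[(src, dst)].append(ts)

    candidates = []
    for pair, stamps in times.items():
        stamps.sort()
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        if len(gaps) + 1 < min_events:
            continue
        avg = mean(gaps)
        if avg > 0 and pstdev(gaps) / avg <= max_jitter_ratio:
            candidates.append((pair, round(avg, 1)))
    return candidates

# Hypothetical flow records: one host checking in every 30 seconds
flows = [("10.2.8.15", "203.0.113.50", 1_700_000_000 + 30 * i) for i in range(20)]
print(beacon_candidates(flows))  # [(('10.2.8.15', '203.0.113.50'), 30.0)]
```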

Thus, option A is the correct interpretation.

Question 35

Which control MOST effectively prevents attackers from reusing captured Kerberos tickets during a Pass-the-Ticket attack?

A) Enforce short Kerberos ticket lifetimes and regular key rotation
B) Require users to reset passwords monthly
C) Disable all domain accounts during investigations
D) Increase password complexity requirements

Answer: A

Explanation:

Pass-the-Ticket attacks involve adversaries stealing Kerberos tickets to impersonate legitimate users and escalate privileges across the network. Attackers use captured TGTs or service tickets to access sensitive systems without knowing actual passwords. The most effective mitigation is reducing ticket longevity and ensuring regular rotation of encryption keys that protect Kerberos communications.

Option A is correct because shorter ticket lifetimes limit the usable window for attackers. Regular key rotation invalidates previously stolen tickets, forcing attackers to continuously reacquire credentials, which increases detection risk. Enforcing strong Kerberos policies such as constrained delegation and strict timestamp requirements further mitigates ticket replay attacks.

Option B does not invalidate captured tickets. Even with password resets, stolen tickets remain active until expiration.

Option C is overly destructive and impractical. Disabling all accounts halts critical operations and does not constitute a preventative control.

Option D improves baseline password security but does nothing to prevent attackers from misusing tickets already stolen.

CySA+ analysts must understand Kerberos architecture, ticket encryption, replay detection, and how adversaries exploit authentication weaknesses. Proper ticket management and key hygiene are central defenses against Pass-the-Ticket exploitation.

Thus, option A is the strongest preventative measure.

Question 36

Which of the following BEST describes the primary purpose of threat intelligence feeds in a security operations environment?

A) Provide up-to-date indicators of compromise to improve detection and response
B) Store historical log data for regulatory compliance
C) Serve as backup for endpoint configuration files
D) Replace vulnerability scanning in network assessments

Answer: A

Explanation:

Threat intelligence feeds supply security operations teams with timely, actionable information about emerging threats, including indicators of compromise (IOCs), malicious IP addresses, domain names, hashes, and tactics, techniques, and procedures (TTPs) used by attackers. These feeds enhance detection and incident response capabilities by correlating observed network events with known threats. Analysts can configure SIEM platforms, intrusion detection systems, and endpoint detection tools to automatically match traffic or files against intelligence data, enabling faster containment and remediation.

Option A is correct because the core function of threat intelligence is proactive detection. It allows security teams to anticipate attacker behavior, enrich alerts, and reduce dwell time during incidents. The feeds can be external (industry consortiums, security vendors) or internal (custom IOCs generated from prior incidents).
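A minimal sketch of feed-to-telemetry matching, assuming the feed has already been normalized into simple indicator sets; the indicator values and event records below are placeholders:

```python
# Hypothetical normalized threat feed
IOC_IPS = {"198.51.100.23", "203.0.113.77"}
IOC_DOMAINS = {"update-sync.badactor.net"}
IOC_SHA256 = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

def match_events(events):
    """Return events whose destination IP, domain, or file hash appears in the feed."""
    hits = []
    for e in events:
        if (e.get("dst_ip") in IOC_IPS
                or e.get("domain") in IOC_DOMAINS
                or e.get("sha256") in IOC_SHA256):
            hits.append(e)
    return hits

# Hypothetical SIEM events
events = [
    {"host": "WKS-0172", "dst_ip": "198.51.100.23"},
    {"host": "SRV-DB01", "domain": "intranet.example.com"},
]
for hit in match_events(events):
    print("IOC match:", hit)
```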

Option B is related to log management and compliance, but historical logs alone do not provide proactive threat identification. They are primarily useful for forensic investigation after an incident has occurred.

Option C is inaccurate. Backups of configuration files are part of system recovery planning, not threat intelligence.

Option D is misleading because vulnerability scanning identifies weaknesses in systems, while threat intelligence provides context for ongoing attacks and adversary behaviors. They complement each other rather than replace one another.

Effective use of threat intelligence involves validating feed accuracy, integrating data into SIEM alerts, correlating multiple threat sources, and prioritizing alerts based on relevance to the organization’s environment. Analysts can detect advanced persistent threats (APTs), malware campaigns, and phishing attempts by mapping feed data to internal network events.

Thus, option A is the most accurate representation of threat intelligence feeds’ primary purpose.

Question 37

During a security assessment, an analyst notices repeated ARP replies from an unknown host claiming the IP of the default gateway. Which attack is MOST likely occurring?

A) ARP spoofing (ARP poisoning)
B) SYN flood
C) DNS amplification
D) Cross-site scripting (XSS)

Answer: A

Explanation:

ARP spoofing, or ARP poisoning, is a network-based attack in which an attacker sends forged ARP messages over a local area network. The goal is to associate the attacker’s MAC address with the IP address of a trusted device, usually the default gateway, to intercept, modify, or block traffic. This can enable man-in-the-middle (MITM) attacks, data exfiltration, or session hijacking.

Option A is correct. Repeated ARP replies from an unfamiliar host claiming the gateway IP indicate that the attacker is attempting to insert themselves into the network path. Analysts can detect such anomalies using ARP monitoring tools, checking MAC-to-IP mappings, and validating network topology. Implementing dynamic ARP inspection, port security, and VLAN segmentation are effective mitigations.
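A small sketch of the consistency check, assuming ARP replies have been captured as (IP, MAC) observations; the addresses below are illustrative:

```python
from collections import defaultdict

def detect_arp_conflicts(observations):
    """Return IPs that have been claimed by more than one MAC address."""
    claims = defaultdict(set)
    for ip, mac in observations:
        claims[ip].add(mac.lower())
    return {ip: macs for ip, macs in claims.items() if len(macs) > 1}

# Hypothetical capture: the gateway IP is suddenly claimed by a second MAC
observed = [
    ("192.168.1.1", "aa:bb:cc:dd:ee:01"),   # legitimate gateway
    ("192.168.1.1", "de:ad:be:ef:00:99"),   # unknown host answering for the gateway
    ("192.168.1.25", "aa:bb:cc:dd:ee:25"),
]
print(detect_arp_conflicts(observed))
```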

Option B (SYN flood) involves overloading a target system’s TCP stack with half-open connections, leading to denial of service. It does not involve ARP replies or impersonating network devices.

Option C (DNS amplification) is an external DDoS attack exploiting open DNS resolvers. It also does not involve local network ARP traffic.

Option D (XSS) targets web applications by injecting malicious scripts into users’ browsers. It is unrelated to ARP and local network communication.

ARP poisoning detection involves analyzing ARP tables, monitoring duplicate MAC-IP mappings, using intrusion detection signatures for anomalous ARP behavior, and deploying endpoint monitoring. Quick mitigation can involve removing the attacker from the network, regenerating ARP tables, and implementing strict network access controls.

Thus, option A accurately describes the observed network attack.

Question 38

Which type of log analysis BEST helps identify an attacker performing privilege escalation on a Windows server?

A) Event logs for account management and security privilege changes
B) Web server access logs showing 200 OK responses
C) DNS query logs with standard domain lookups
D) Firewall logs showing allowed outbound traffic

Answer: A

Explanation:

Privilege escalation occurs when an attacker gains higher access rights than originally granted, potentially reaching administrative or root-level permissions. On Windows servers, system and security event logs capture changes to account permissions, group membership modifications, user logon attempts, and administrative actions.

Option A is correct because examining logs related to account management, such as Event IDs 4672 (special privileges assigned), 4728 (group membership added), and 4648 (explicit credential use), allows analysts to identify unauthorized privilege escalation. By correlating these events with logon sessions, source IP addresses, and unusual timeframes, a security analyst can detect suspicious activity.
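A sketch of the filtering step, assuming security events have been exported as dictionaries carrying an EventID and account field; the event IDs are the real Windows identifiers cited above, while the account baseline and records are hypothetical:

```python
PRIV_ESC_EVENT_IDS = {4672, 4728, 4648}  # privileged logon, group add, explicit credentials
ADMIN_ACCOUNTS = {"Administrator", "svc-patching"}  # hypothetical known-admin baseline

def suspicious_privilege_events(events):
    """Return privilege-related events involving accounts outside the admin baseline."""
    return [
        e for e in events
        if e["EventID"] in PRIV_ESC_EVENT_IDS and e["Account"] not in ADMIN_ACCOUNTS
    ]

# Hypothetical exported events
events = [
    {"EventID": 4672, "Account": "jdoe", "Host": "SRV-FILE01"},
    {"EventID": 4624, "Account": "jdoe", "Host": "SRV-FILE01"},
    {"EventID": 4728, "Account": "Administrator", "Host": "DC01"},
]
print(suspicious_privilege_events(events))  # only the 4672 event for 'jdoe'
```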

Option B (web server access logs) is insufficient because privilege escalation is primarily a system-level event, not observable via web application response codes.

Option C (DNS query logs) tracks domain resolution and is rarely helpful for identifying unauthorized privilege elevation.

Option D (firewall logs) shows traffic flow but does not provide insight into user-level permission changes.

Effective analysis involves establishing baselines for normal administrative activity, flagging abnormal privilege assignments, correlating events across servers, and maintaining audit trails for forensic investigation. Alerting mechanisms can trigger when non-administrative accounts gain elevated rights unexpectedly.

Thus, option A is the correct approach for detecting Windows privilege escalation.

Question 39

An analyst observes a large volume of ICMP echo requests from multiple hosts to a single internal system. What is the MOST likely attack scenario?

A) ICMP flood DoS attack
B) SQL injection attack
C) Credential stuffing attack
D) Malicious email phishing campaign

Answer: A

Explanation:

An ICMP flood attack, a type of Denial-of-Service (DoS) attack, involves overwhelming a target system with Internet Control Message Protocol echo requests (pings). When multiple hosts generate high volumes of ICMP traffic to a single system, it can saturate bandwidth, exhaust system resources, and lead to service disruption.

Option A is correct because repeated ICMP echo requests from multiple sources targeting one system fit the behavior of a volumetric DoS attack. Analysts monitor network traffic, identify abnormal packet rates, and implement rate-limiting or filtering at firewalls or routers to mitigate the impact. ICMP flood detection often relies on threshold-based alerts and traffic anomaly analysis.
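A simple rate-threshold sketch over packet metadata, assuming captures have been summarized as (timestamp, source, destination, protocol) tuples; the per-minute threshold is illustrative:

```python
from collections import Counter

def icmp_flood_targets(packets, window_seconds=60, threshold=1000):
    """Flag destinations receiving more ICMP packets per window than the threshold."""
    per_window = Counter()
    for ts, src, dst, proto in packets:
        if proto == "icmp":
            per_window[(dst, int(ts) // window_seconds)] += 1
    return sorted({dst for (dst, _), n in per_window.items() if n > threshold})

# Hypothetical capture: thousands of echo requests hitting one host inside a minute
packets = [(1_700_000_000 + (i % 60), f"10.9.{i % 50}.{i % 200}", "10.0.0.15", "icmp")
           for i in range(3000)]
print(icmp_flood_targets(packets))  # ['10.0.0.15']
```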

Option B (SQL injection) involves manipulating database queries through web applications. It is unrelated to ICMP traffic patterns.

Option C (credential stuffing) targets login systems using stolen credentials, typically through HTTP POST requests, not ICMP packets.

Option D (phishing) is an email-based social engineering technique and does not involve ICMP traffic.

Analysts should investigate the source IP addresses, check for compromised hosts generating ICMP traffic, correlate with firewall and IDS logs, and deploy rate-limiting rules to prevent further disruption. Maintaining network segmentation and monitoring for unusual patterns ensures detection of both volumetric and stealthy attacks.

Thus, option A is the correct identification of this network behavior.

Question 40

Which technique MOST effectively reduces the attack surface of a cloud-based application hosting sensitive data?

A) Implementing strict IAM policies, least privilege, and multi-factor authentication
B) Increasing the storage capacity of the application server
C) Monitoring website traffic using analytics tools only
D) Rewriting all application code in a new programming language

Answer: A

Explanation:

Attack surface reduction involves minimizing the number of entry points an attacker can exploit. In cloud environments, effective measures include enforcing strict Identity and Access Management (IAM) policies, assigning least privilege to accounts, and requiring multi-factor authentication (MFA) for access to sensitive resources. These measures prevent unauthorized access even if credentials are compromised and ensure that users or services only have permissions necessary for their role.

Option A is correct because it directly targets common attack vectors in cloud applications, such as compromised accounts or privilege abuse. Least privilege ensures that excessive permissions are not available to attackers, MFA adds a layer of authentication, and strict IAM policies define clear access boundaries.
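For illustration only, here is a hedged sketch using the AWS SDK for Python (boto3) to create a narrowly scoped, read-only policy; the bucket, prefix, and policy name are hypothetical, credentials are assumed to be configured, and the same least-privilege principle applies to any cloud provider's IAM model:

```python
import json
import boto3  # AWS SDK for Python; assumes credentials are already configured

iam = boto3.client("iam")

# Hypothetical least-privilege policy: read-only access to a single S3 prefix.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-sensitive-bucket/reports/*",
    }],
}

response = iam.create_policy(
    PolicyName="ReportsReadOnly",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
print("Created policy:", response["Policy"]["Arn"])
```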

Option B (increasing storage) is unrelated to security. It affects performance and capacity but does not mitigate exploitation risks.

Option C (traffic monitoring) provides insight but does not actively prevent exploitation. It is a detection control rather than a preventative measure.

Option D (rewriting code) is resource-intensive and unnecessary for attack surface reduction unless specific vulnerabilities exist. Security hardening, patching, and access controls are more efficient methods.

Analysts must adopt a combination of proactive measures (IAM, MFA, network segmentation, secure coding) and reactive monitoring (SIEM, log correlation, anomaly detection) to comprehensively protect cloud-based sensitive data. Regular access reviews, automated alerts for privilege changes, and adherence to security frameworks ensure minimized exposure.

Thus, option A is the most effective technique for reducing the attack surface of cloud-hosted applications.

 
