CompTIA 220-1102 A+ Certification Exam: Core 2 Exam Dumps and Practice Test Questions Set 1 Q 1-20


Question 1

A technician needs to secure a wireless network using strong encryption and mutual authentication. Which protocol should be deployed?

A) WPA2-Enterprise
B) WEP
C) WPA
D) Open

Answer: A

Explanation:

Securing a wireless network in a modern enterprise environment requires both strong encryption and reliable authentication to prevent unauthorized access and protect sensitive data. WPA2-Enterprise is the optimal solution because it uses the Advanced Encryption Standard (AES) to encrypt wireless traffic while providing mutual authentication through the 802.1X/EAP framework backed by a RADIUS server. This combination ensures that each device connecting to the network is verified before it gains access and that transmitted data remains confidential and protected against eavesdropping. Option B) WEP is outdated and vulnerable due to weak encryption and simple key management. C) WPA improves security compared to WEP but still relies on less secure TKIP encryption, which is susceptible to attacks. D) Open networks provide no authentication or encryption and are extremely risky in enterprise environments.

WPA2-Enterprise leverages the 802.1X standard to enforce authentication for every device attempting to connect to the network. When a client device tries to join, it acts as a supplicant and provides credentials such as a username/password or digital certificate. The authentication process occurs through a RADIUS server, which verifies credentials against a centralized directory, such as Active Directory. Once verified, the network dynamically grants access and can assign the device to a specific VLAN or policy based on the user’s role or department. This level of control enhances both security and operational efficiency.
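The role-based access decision described above can be sketched as a simple policy lookup. This is a hypothetical illustration of what a network applies after the RADIUS server validates credentials; the role names, VLAN IDs, and ACL names are illustrative, not part of any standard.

```python
# Hypothetical post-authentication policy lookup: once RADIUS verifies a
# user's credentials, the network applies per-role attributes such as a
# VLAN assignment. All names and numbers below are illustrative.
ROLE_POLICY = {
    "engineering": {"vlan": 10, "acl": "eng-acl"},
    "finance":     {"vlan": 20, "acl": "fin-acl"},
    "guest":       {"vlan": 99, "acl": "guest-acl"},
}

def authorize(username: str, directory: dict) -> dict:
    """Return the VLAN/ACL policy for an authenticated user, or deny."""
    role = directory.get(username)
    if role is None or role not in ROLE_POLICY:
        return {"access": "deny"}
    return {"access": "permit", **ROLE_POLICY[role]}

directory = {"alice": "engineering", "bob": "finance"}
print(authorize("alice", directory))    # permit, VLAN 10
print(authorize("mallory", directory))  # deny
```

The key point the sketch captures is that access decisions are centralized: changing a user's role in the directory changes their VLAN and policy everywhere, with no per-device reconfiguration.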

AES encryption used in WPA2-Enterprise ensures that data transmitted over the wireless network cannot be easily decrypted by attackers, protecting sensitive corporate information from interception. In addition, mutual authentication ensures that clients are connecting to a legitimate access point, preventing man-in-the-middle attacks often seen in rogue access point scenarios. WPA2-Enterprise is widely supported across modern access points, laptops, and mobile devices, making it a scalable solution for organizations of any size.

By implementing WPA2-Enterprise, organizations ensure that wireless communications remain secure, network resources are only accessible to authorized users, and compliance requirements are maintained. This approach strengthens the overall network security posture while minimizing the risk of breaches or unauthorized access. Proper configuration, periodic auditing, and monitoring of authentication logs further enhance protection against emerging threats.

Question 2

A company wants to prevent malware infections by restricting applications from running on endpoint devices. Which security control should be implemented?

A) Application whitelisting
B) Antivirus software
C) Firewall rules
D) VPN

Answer: A

Explanation:

Preventing malware infections on endpoint devices requires proactive control mechanisms that restrict unauthorized software execution. Application whitelisting is the most effective solution because it allows only pre-approved programs to run while blocking all others by default. This approach prevents malware, ransomware, and other malicious software from executing, even if the file is introduced via email, USB drive, or web download. Option B) Antivirus software detects and removes threats but is reactive and may not detect zero-day malware. C) Firewall rules control network traffic but do not prevent unauthorized applications from running locally. D) VPNs encrypt network traffic but have no impact on local application execution or malware prevention.

Application whitelisting is configured by creating a list of trusted executables, scripts, and software that are permitted to run on endpoints. When a user attempts to run an unapproved application, the system blocks execution and logs the attempt. This reduces the attack surface and prevents inadvertent execution of malicious code. In enterprise environments, administrators can deploy centralized whitelisting policies through endpoint management systems, ensuring consistency across all devices. Updates to the whitelist require administrative approval, maintaining control over which software can be installed or executed.

Whitelisting is particularly effective against sophisticated threats, including ransomware and trojans that traditional antivirus may not detect. While antivirus relies on signature databases and heuristics, whitelisting enforces strict execution policies, blocking unknown or unauthorized applications immediately. Combining application whitelisting with other security controls such as antivirus, intrusion prevention, and patch management creates a multi-layered defense strategy, reducing the likelihood of breaches.

Administrators must also ensure that whitelisting policies are carefully managed to allow legitimate software updates and prevent business disruptions. Integrating whitelisting with logging and monitoring systems provides visibility into blocked application attempts and potential security incidents. By enforcing strict execution policies, organizations significantly reduce the risk of malware infections, enhance endpoint security, and maintain compliance with internal policies and industry regulations.

Question 3

A technician needs to configure secure remote access for employees using encryption and strong authentication. Which solution is appropriate?

A) VPN with multifactor authentication
B) NAT
C) Open Wi-Fi access
D) DHCP snooping

Answer: A

Explanation:

Securing remote access for employees is crucial for protecting sensitive data while enabling flexible work environments. Deploying a VPN with multifactor authentication is the best solution because it encrypts traffic between remote clients and corporate resources while ensuring that only authorized users can connect. Option B) NAT translates IP addresses but does not provide authentication or encryption. C) Open Wi-Fi access provides no security and exposes data to interception. D) DHCP snooping protects against rogue DHCP servers but does not secure remote access connections.

A VPN creates a secure, encrypted tunnel between the employee’s device and the corporate network, preventing eavesdropping or interception of sensitive information. Using protocols such as IPsec or SSL/TLS, VPNs encrypt data in transit, safeguarding emails, financial information, and proprietary documents. Multifactor authentication adds an extra layer of security by requiring multiple forms of verification, such as a password and a one-time code from a mobile device, ensuring that compromised credentials alone are insufficient to gain access.
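For the SSL/TLS case, the client-side security posture can be illustrated with Python's standard `ssl` module: before any tunnel is established, the context should enforce certificate verification and refuse legacy protocol versions. This is a minimal sketch of the TLS configuration only, not a full VPN client.

```python
import ssl

# Minimal sketch of the TLS side of an SSL-VPN client: enforce server
# certificate verification, hostname checking, and a modern protocol floor.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3/TLS 1.0/1.1

# create_default_context already requires certificates and hostname checks:
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

The point of the sketch is that a secure channel depends on verifying the server's identity, not just on encryption; disabling `check_hostname` or `verify_mode` would reintroduce exactly the man-in-the-middle exposure a VPN is meant to prevent.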

Implementing a VPN with multifactor authentication ensures compliance with security standards and regulations, including HIPAA, PCI-DSS, and GDPR, which mandate secure transmission of sensitive data. Centralized management allows administrators to define policies for remote access, monitor connections, and revoke access if a device or user is compromised. VPNs also support granular access controls, enabling employees to reach only the resources necessary for their roles, minimizing exposure of sensitive systems.

Regularly updating VPN client software, enforcing strong password policies, and monitoring authentication logs further enhance security. By combining encryption, strong authentication, and centralized monitoring, organizations can maintain secure, reliable remote access that protects corporate assets while supporting productivity. This solution balances security and convenience, providing employees with access to necessary resources without compromising the integrity or confidentiality of enterprise networks.

Question 4

An administrator wants to segment network traffic to improve security and performance. Which technology achieves this?

A) VLAN
B) NAT
C) VPN
D) DHCP

Answer: A

Explanation:

Segmenting network traffic is a key strategy to enhance both security and performance in enterprise networks. VLANs (Virtual Local Area Networks) are the most effective solution because they logically divide a physical network into multiple isolated broadcast domains. Option B) NAT translates IP addresses but does not segment traffic internally. C) VPN secures remote connections but does not segment local traffic. D) DHCP assigns IP addresses but does not provide traffic isolation or segmentation.

VLANs improve network performance by reducing unnecessary broadcast traffic within each segment, preventing congestion and optimizing bandwidth utilization. They also enhance security by isolating sensitive systems, such as financial servers or HR databases, from general user traffic. Administrators can assign VLAN membership based on department, user role, or application type, enforcing policies that control communication between segments using access control lists or firewalls.

VLANs use 802.1Q tagging to identify traffic associated with each virtual network as it traverses switches, allowing multiple VLANs to coexist on the same physical infrastructure without interfering with each other. This approach reduces hardware costs by minimizing the number of switches required and allows flexible network expansion. Trunk ports are used to carry traffic for multiple VLANs between switches, while access ports connect devices to a specific VLAN.

Implementing VLANs also enhances monitoring and troubleshooting. Administrators can track traffic flows per segment, detect unusual activity, and isolate issues without impacting the entire network. Security policies, including firewalls and intrusion detection, can be applied at the VLAN level to protect sensitive resources. VLAN segmentation also supports compliance with regulatory requirements that mandate separation of critical or confidential data.

By strategically deploying VLANs, organizations achieve better traffic management, improved security, simplified administration, and cost-effective scalability. Proper planning ensures optimal placement, consistent configuration, and enforcement of communication policies, creating a robust and efficient network infrastructure that aligns with enterprise goals and compliance requirements.

Question 5

A technician needs to troubleshoot a slow internet connection caused by DNS resolution issues. Which tool should be used first?

A) nslookup
B) ipconfig
C) ping
D) traceroute

Answer: A

Explanation:

When troubleshooting slow internet connections caused by DNS resolution problems, nslookup is the most appropriate diagnostic tool. It allows administrators to query DNS servers directly to verify name resolution, identify misconfigurations, and detect delays or failures in resolving domain names. Option B) ipconfig provides IP configuration details but does not directly test DNS queries. C) ping tests network connectivity but does not provide DNS information. D) traceroute identifies the path packets take to reach a destination but does not analyze DNS resolution specifically.

Using nslookup, administrators can perform queries to different DNS servers, test forward and reverse resolution, and verify that domain names correctly resolve to IP addresses. This is critical because slow or failed DNS resolution often results in delayed website access or intermittent connectivity, which may appear as general network slowness. Nslookup allows specifying alternative DNS servers to determine whether issues are localized to a specific server or affect multiple DNS endpoints.

Nslookup can also provide detailed error codes, including server failures, non-existent domains, or timeouts, helping pinpoint the root cause. When combined with log analysis and monitoring tools, it enables administrators to detect recurring issues, misconfigured DNS zones, or outdated records that may impact network performance. Additionally, nslookup supports advanced queries such as MX, TXT, and CNAME records, assisting in troubleshooting email delivery problems and domain-related issues.

Addressing DNS resolution problems often improves overall network speed and reliability. Administrators can optimize caching policies, correct misconfigured settings, and ensure authoritative DNS servers are functioning properly. Nslookup provides a straightforward, reliable method to diagnose DNS-related slowness, allowing IT teams to restore optimal network performance efficiently. By starting with nslookup, organizations can quickly identify and resolve root causes of DNS delays, improving user experience, productivity, and the reliability of internet-connected services.

Question 6

A technician must configure a network device to allow multiple VLANs to communicate. Which technology should be used?

A) Router-on-a-stick
B) Port mirroring
C) NAT
D) DHCP relay

Answer: A

Explanation:

In enterprise networking, when multiple VLANs exist, they are isolated at Layer 2, preventing devices on different VLANs from communicating directly. A common solution for enabling inter-VLAN communication is the router-on-a-stick configuration. This approach involves a single router interface connected to a switch using a trunk port that carries multiple VLAN tags. Each VLAN is assigned a subinterface on the router, which acts as a gateway for devices within that VLAN. Option B) Port mirroring is used for network monitoring and does not facilitate inter-VLAN routing. C) NAT translates IP addresses for traffic leaving the internal network but does not route between VLANs. D) DHCP relay forwards DHCP requests but does not provide inter-VLAN communication.

Router-on-a-stick is efficient for smaller networks where a dedicated router interface per VLAN would be impractical or cost-prohibitive. The trunk port carries traffic from all VLANs, and the router’s subinterfaces examine VLAN tags to route packets appropriately. This enables devices on separate VLANs to communicate while maintaining logical network segmentation. The router can also enforce access control policies between VLANs, improving security by restricting unnecessary communication.

This configuration is critical for managing enterprise networks where departments or functions are separated into different VLANs, such as accounting, HR, and development. Without proper inter-VLAN routing, collaboration and resource sharing would require cumbersome physical connections or workarounds. Router-on-a-stick simplifies management while maintaining a high degree of network control.

Administrators must carefully configure subinterfaces, IP addresses, and VLAN tagging on both the switch and router to ensure seamless connectivity. Misconfiguration can lead to traffic drops or misrouted packets, so rigorous testing and verification are essential. Monitoring the interface for performance metrics, bandwidth usage, and errors helps maintain operational reliability.

Router-on-a-stick supports scalability by allowing additional VLANs to be added without needing extra physical interfaces. Combined with proper VLAN planning, trunking standards like 802.1Q, and security measures such as ACLs, it ensures efficient network traffic management, improved performance, and compliance with enterprise security policies. Implementing router-on-a-stick is a cornerstone of modern VLAN design, balancing cost, simplicity, and robust network segmentation.
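The forwarding decision a router-on-a-stick makes can be sketched as a lookup against its connected subinterface subnets. The VLAN numbers and subnets below are illustrative.

```python
import ipaddress

# Sketch of router-on-a-stick forwarding: one trunk, one subinterface per
# VLAN, each acting as that VLAN's default gateway. Subnets are illustrative.
SUBINTERFACES = {
    10: ipaddress.ip_network("192.168.10.0/24"),  # e.g. accounting
    20: ipaddress.ip_network("192.168.20.0/24"),  # e.g. HR
    30: ipaddress.ip_network("192.168.30.0/24"),  # e.g. development
}

def egress_vlan(dst_ip: str):
    """Return the VLAN whose connected subnet contains dst_ip, else None."""
    addr = ipaddress.ip_address(dst_ip)
    for vlan, net in SUBINTERFACES.items():
        if addr in net:
            return vlan
    return None  # no connected route; would fall through to a default route

print(egress_vlan("192.168.20.7"))  # 20 — routed out the VLAN 20 subinterface
print(egress_vlan("10.0.0.1"))      # None
```

A packet from VLAN 10 destined for 192.168.20.7 arrives tagged on the trunk, is routed by the VLAN 10 subinterface, and leaves re-tagged for VLAN 20, which is exactly the hairpin path that gives the design its name.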

Question 7

A user reports slow file access on a network share. Which tool helps identify latency caused by network performance issues?

A) Wireshark
B) nslookup
C) ipconfig
D) dig

Answer: A

Explanation:

Diagnosing slow file access over a network requires understanding where delays occur between the client and the server. Wireshark is the most effective tool for analyzing network performance issues because it captures and inspects network traffic in real time. It allows administrators to identify latency, packet loss, retransmissions, or errors that contribute to slow file access. Option B) nslookup focuses only on DNS resolution and does not analyze network performance. C) ipconfig shows IP configuration details but does not provide insights into packet-level latency. D) dig is similar to nslookup for DNS diagnostics but is not a performance tool.

Using Wireshark, administrators can capture traffic between the client and the file server and examine timestamps, sequence numbers, and packet delays. By analyzing TCP streams, it is possible to detect retransmissions caused by network congestion, identify excessive latency due to routing issues, and monitor throughput on the connection. Wireshark also allows filtering specific protocols, such as SMB, to focus on file-sharing traffic without being overwhelmed by unrelated packets.

Understanding the root cause of slow file access is critical in enterprise networks where employees rely on shared resources for productivity. Common issues include overloaded network segments, misconfigured switches, duplex mismatches, or high packet loss. Wireshark provides visibility into these problems, enabling administrators to pinpoint bottlenecks. Additionally, it can reveal security issues such as malformed packets or unauthorized traffic affecting performance.

Once identified, network issues can be resolved by optimizing switch configurations, upgrading hardware, adjusting routing, or balancing loads across multiple paths. Performance monitoring over time helps prevent recurrence and supports proactive maintenance. Wireshark’s detailed insights ensure network reliability, improve end-user experience, and help administrators make informed decisions to optimize traffic flow and reduce latency.

Question 8

A technician is implementing multifactor authentication for cloud applications. Which factor combines physical tokens with personal credentials?

A) Something you have and something you know
B) Something you are
C) Something you do
D) Location-based factor

Answer: A

Explanation:

Multifactor authentication (MFA) strengthens security by requiring two or more independent forms of verification. Combining physical tokens (like smart cards or key fobs) with personal credentials (passwords or PINs) represents the “something you have” and “something you know” factors. Option B) Something you are refers to biometrics such as fingerprints or facial recognition. C) Something you do refers to behavioral patterns like typing rhythm. D) Location-based factors rely on the physical or network location of the user but do not involve tokens or passwords.

MFA mitigates the risk of compromised passwords by requiring an additional factor that attackers are unlikely to possess. Physical tokens generate time-sensitive codes that must match user credentials to grant access, creating a layered defense. This combination is particularly effective for securing cloud applications, which are accessible remotely and often targeted by phishing or credential-stuffing attacks.
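The time-sensitive codes generated by hardware tokens are typically HOTP (RFC 4226) or its time-based variant TOTP (RFC 6238, which is HOTP with the counter derived from the clock). The algorithm is short enough to sketch with the standard library:

```python
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP — the algorithm behind one-time token codes."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: secret "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the code depends on a shared secret the attacker does not have, a phished password alone is not enough, which is precisely the "something you have" layer.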

Implementing MFA involves integrating authentication mechanisms with the identity provider for cloud services. Administrators must ensure that token management processes, including issuance, revocation, and replacement, are secure and user-friendly. Users must understand the authentication workflow, and training may be required to ensure smooth adoption. MFA policies can also enforce risk-based access, prompting additional factors for high-risk activities or unusual login attempts.

Combining “something you have” and “something you know” reduces the likelihood of unauthorized access even if one factor is compromised. This approach aligns with regulatory compliance standards such as GDPR, HIPAA, and PCI-DSS. MFA improves security posture without significantly hindering usability when implemented with modern solutions like mobile authenticators or hardware tokens. Organizations benefit from reduced account compromise risk, enhanced user accountability, and improved overall network and cloud security.

Question 9

A technician needs to prevent unauthorized devices from connecting to a corporate wireless network. Which method is most effective?

A) MAC address filtering
B) Disabling DHCP
C) Using open authentication
D) Static IP assignment

Answer: A

Explanation:

Preventing unauthorized devices from connecting to a corporate wireless network is essential to maintain confidentiality, integrity, and network performance. MAC address filtering is an effective method because it allows administrators to define a list of authorized devices that can associate with the access point, while blocking any other hardware. Option B) Disabling DHCP only prevents automatic IP assignment but does not stop manually configured devices from connecting. C) Open authentication provides no security and allows unrestricted access. D) Static IP assignment alone cannot prevent unauthorized device association since attackers can still use allowed IP addresses.

MAC address filtering works at Layer 2 by identifying devices based on their unique hardware addresses. Access points compare incoming connection requests against the authorized list and deny association to unrecognized MAC addresses. While this method is not foolproof against spoofing, it provides a first layer of control and is often combined with stronger authentication mechanisms such as WPA2-Enterprise.
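The Layer 2 allow-list comparison can be sketched directly; note that real MACs arrive in several notations (colons, dashes, Cisco dotted), so the check must canonicalize first. The addresses below are illustrative.

```python
def normalize_mac(mac: str) -> str:
    """Canonicalize a MAC address to lowercase, colon-separated octets."""
    digits = mac.lower().replace("-", "").replace(":", "").replace(".", "")
    if len(digits) != 12 or any(c not in "0123456789abcdef" for c in digits):
        raise ValueError(f"invalid MAC: {mac!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))

# Illustrative allow-list, stored in canonical form regardless of input style.
ALLOWED = {normalize_mac(m) for m in ["00:1A:2B:3C:4D:5E", "aa-bb-cc-dd-ee-ff"]}

def may_associate(client_mac: str) -> bool:
    # First-layer filter only: MAC addresses can be spoofed, so this
    # complements rather than replaces WPA2-Enterprise authentication.
    return normalize_mac(client_mac) in ALLOWED

print(may_associate("AA:BB:CC:DD:EE:FF"))  # True — notation is canonicalized
print(may_associate("12:34:56:78:9a:bc"))  # False
```

Skipping normalization is a common operational bug: a device entered as `AA-BB-...` would silently fail to match a request presented as `aa:bb:...`.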

Administrators should maintain an accurate and up-to-date MAC whitelist to account for new devices and employee turnover. This ensures operational continuity while enforcing security policies. Logging connection attempts from unauthorized devices helps identify potential intrusions or policy violations. MAC filtering is particularly useful in environments with a fixed set of known devices, such as office networks, where access can be strictly controlled.

Using MAC address filtering in conjunction with encryption and authentication creates a layered approach that strengthens wireless security. It deters casual or opportunistic attackers and complements enterprise policies, reducing the attack surface. While modern networks should not rely solely on MAC filtering, it remains a valuable tool when implemented as part of a broader security framework, providing additional assurance against unauthorized network access.

Question 10

A company wants to monitor network traffic to identify suspicious activity and potential intrusions. Which device is most appropriate?

A) IDS
B) Switch
C) Router
D) DHCP server

Answer: A

Explanation:

Monitoring network traffic to detect suspicious activity is a core component of cybersecurity in enterprise networks. An Intrusion Detection System (IDS) is specifically designed to analyze traffic for potential attacks, anomalies, or policy violations, alerting administrators to possible security incidents. Option B) Switches manage traffic forwarding at Layer 2 but do not perform security monitoring. C) Routers direct traffic between networks but are not primarily focused on intrusion detection. D) DHCP servers assign IP addresses but do not inspect or analyze network traffic.

IDS devices use signature-based detection, anomaly detection, or behavior analysis to identify malicious patterns. Signature-based systems compare network activity to a database of known attack patterns, while anomaly-based systems detect deviations from normal behavior, such as unexpected traffic spikes or unusual protocol usage. Both approaches provide early warning of attacks, enabling rapid response before significant damage occurs.

Deploying an IDS involves strategic placement at network entry points or critical segments to capture relevant traffic. Alerts generated by the IDS can be integrated with security information and event management (SIEM) systems, providing centralized analysis and correlation with other logs. Regular updates to signature databases and tuning of detection thresholds are crucial to minimize false positives and maintain effective monitoring.

By implementing an IDS, organizations improve situational awareness, enhance incident response capabilities, and support compliance with security regulations. Continuous traffic analysis enables identification of compromised devices, malware propagation, or unauthorized access attempts. Combined with firewalls, endpoint protection, and access controls, an IDS forms a critical layer in a multi-tiered defense strategy, helping organizations detect, respond to, and mitigate potential network threats effectively.

Question 11

A technician is troubleshooting slow web page loads across multiple clients. Which tool identifies packet loss and latency issues?

A) Ping
B) ipconfig
C) tracert
D) netstat

Answer: A

Explanation:

When users experience slow web page loads across multiple clients, it is critical to identify network performance bottlenecks. One of the primary tools for diagnosing packet loss and latency issues is Ping. Ping sends ICMP echo request packets to a target device and measures the time taken for responses, providing real-time feedback about network connectivity and responsiveness. By observing packet loss percentages and round-trip times, administrators can determine whether the delay originates from the local network, ISP, or destination server. Option B) ipconfig provides local IP configuration details but cannot measure network latency. Option C) tracert traces the path packets take across multiple routers and can show where delays occur, but it does not directly measure packet loss per hop over time. Option D) netstat shows active connections and listening ports but does not provide packet loss or latency information.

Ping is a foundational diagnostic tool for both Windows and Linux environments and is often the first step in troubleshooting network performance issues. Network administrators can send continuous ping requests to observe fluctuating latency, which may indicate network congestion, routing loops, or hardware issues. Consistently high round-trip times can point to overloaded switches, misconfigured routers, or external internet problems. Furthermore, analyzing ping results helps in detecting intermittent connectivity issues that may be difficult to observe through other monitoring methods.

In enterprise networks, ping can be combined with more advanced monitoring tools to collect long-term metrics for bandwidth utilization, error rates, and quality of service (QoS). For example, administrators may integrate ping with scripts or monitoring software to generate alerts when packet loss exceeds thresholds or latency spikes, providing proactive maintenance opportunities. Understanding ping output, including minimum, maximum, and average response times, is essential to identify trends and troubleshoot complex network environments effectively.
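The summary a ping run prints can be reproduced from a list of round-trip times, with `None` standing in for a lost packet. The sample values are synthetic.

```python
# Sketch of the statistics ping reports, computed from round-trip times
# in milliseconds; None represents a request that received no reply.
def ping_stats(rtts_ms):
    replies = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(replies)) / len(rtts_ms)
    if not replies:
        return {"loss_pct": loss_pct}
    return {
        "loss_pct": loss_pct,
        "min_ms": min(replies),
        "max_ms": max(replies),
        "avg_ms": round(sum(replies) / len(replies), 1),
    }

print(ping_stats([12.0, 15.0, None, 13.0]))  # 25% loss, min 12 / max 15 / avg 13.3
```

A script wrapping this around periodic probes and alerting when `loss_pct` or `avg_ms` crosses a threshold is the simplest form of the proactive monitoring described above.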

Additionally, interpreting ping results in combination with tracert enhances the diagnostic process by correlating latency with specific network hops. High packet loss at a particular router or firewall may indicate a misconfigured device, insufficient bandwidth allocation, or hardware failure. By systematically testing from multiple client locations, administrators can isolate whether the issue is localized to a segment of the network or affects broader network paths. This approach reduces troubleshooting time, enhances operational efficiency, and ensures consistent web performance for end users, aligning with best practices for network maintenance and proactive monitoring.

Question 12

Which network design concept isolates broadcast domains while allowing controlled communication between departments or functions?

A) VLANs
B) Subnetting
C) NAT
D) VPN

Answer: A

Explanation:

In modern enterprise networks, controlling broadcast traffic and ensuring efficient communication between departments is crucial. Virtual Local Area Networks (VLANs) are the most effective method to achieve this. VLANs segment a physical network into multiple logical networks, isolating broadcast domains while allowing controlled inter-VLAN communication through routers or Layer 3 switches. Option B) Subnetting divides IP address ranges to create logical network segments but does not inherently isolate broadcast domains at Layer 2. Option C) NAT translates IP addresses for external communication but does not segment broadcast traffic. Option D) VPNs encrypt traffic for secure remote access and do not provide Layer 2 broadcast isolation.

Implementing VLANs improves network performance by reducing unnecessary broadcast traffic, which can consume bandwidth and cause latency issues. Each VLAN functions as an independent network segment, allowing administrators to apply policies, prioritize traffic using Quality of Service (QoS), and enforce security measures specific to each group. For example, VLANs can separate sensitive departments such as finance and HR from general staff, enhancing confidentiality and regulatory compliance.

VLANs also provide flexibility for network design, allowing devices to be moved physically without requiring changes to IP addressing. By leveraging trunk links and 802.1Q tagging, multiple VLANs can coexist on the same switch infrastructure, optimizing hardware utilization. Inter-VLAN routing enables controlled communication, where access control lists (ACLs) regulate which VLANs can communicate, reducing attack surfaces and ensuring that only authorized devices interact.

From a troubleshooting perspective, VLAN segmentation simplifies network diagnostics by localizing traffic issues to specific VLANs. Monitoring and logging can focus on VLAN-specific activity, making problem identification faster and more precise. VLANs also support scalability, allowing new VLANs to be added as the organization grows without requiring significant infrastructure changes. Network administrators must carefully plan VLAN IDs, IP addressing, and trunk configurations to prevent misconfigurations that can lead to connectivity problems or broadcast storms.

VLAN implementation is a cornerstone of modern enterprise network design, balancing security, performance, and operational efficiency. Combined with subnetting, QoS, and robust routing protocols, VLANs provide granular control over traffic flow and enhance overall network resilience against congestion, unauthorized access, and performance degradation. By understanding the strategic use of VLANs, organizations can optimize resources while maintaining a secure, manageable, and high-performing network infrastructure.

Question 13

Which type of cloud service model provides hardware, storage, and networking while requiring users to manage operating systems and applications?

A) Infrastructure as a Service (IaaS)
B) Platform as a Service (PaaS)
C) Software as a Service (SaaS)
D) Function as a Service (FaaS)

Answer: A

Explanation:

Understanding cloud service models is vital for IT professionals preparing for the CompTIA A+ certification exam. Infrastructure as a Service (IaaS) delivers fundamental computing resources such as virtual machines, storage, and networking over the cloud. Users are responsible for installing and managing operating systems, applications, and middleware while the provider handles the physical infrastructure. Option B) Platform as a Service (PaaS) provides a preconfigured environment for application development and deployment, reducing the management burden on the user. Option C) Software as a Service (SaaS) delivers fully managed applications, leaving the user with minimal control over underlying infrastructure. Option D) Function as a Service (FaaS) supports serverless computing, executing individual functions without the need for provisioning infrastructure.

IaaS allows organizations to rapidly scale infrastructure up or down based on demand, providing flexibility and cost efficiency compared to maintaining on-premises hardware. Users retain full control over software environments, enabling customization and support for legacy applications or specialized workloads. Cloud providers offer tools for provisioning, monitoring, and managing virtual resources, ensuring reliability, redundancy, and network connectivity while reducing capital expenditure.

Security responsibilities in an IaaS environment follow a shared model. The provider secures the physical data centers, hypervisors, and network hardware, while users must secure operating systems, applications, and data. This requires patch management, firewall configurations, antivirus solutions, and access controls. Organizations can also implement encryption for data at rest and in transit to meet compliance requirements such as HIPAA, PCI-DSS, or GDPR.

IaaS adoption enhances disaster recovery and business continuity strategies. Virtual machines and storage can be replicated across regions or availability zones, providing redundancy in case of hardware failure or natural disasters. Automated provisioning, snapshots, and backup solutions reduce downtime and allow rapid restoration of critical services. Additionally, IaaS supports hybrid environments where organizations combine on-premises and cloud resources, enabling seamless integration and migration over time.

IaaS is widely used for hosting web servers, development environments, testing, big data analytics, and other workloads requiring full control over software layers while avoiding the complexity and cost of physical infrastructure management. Proper understanding of IaaS capabilities, responsibilities, and limitations is crucial for IT professionals responsible for designing, deploying, and managing cloud solutions effectively.
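The shared-responsibility split described above can be made concrete with a short illustrative sketch. The layer names below are generic examples, not tied to any particular cloud provider:

```python
# Illustrative model of the IaaS shared-responsibility boundary:
# the provider secures everything below the hypervisor, the customer
# secures everything from the operating system up.
RESPONSIBILITY = {
    "physical data center": "provider",
    "network hardware": "provider",
    "hypervisor": "provider",
    "operating system": "customer",
    "middleware": "customer",
    "application": "customer",
    "data": "customer",
}

def responsible_party(layer: str) -> str:
    """Return who secures a given layer under the IaaS model."""
    return RESPONSIBILITY[layer.lower()]

print(responsible_party("Hypervisor"))        # provider
print(responsible_party("Operating system"))  # customer
```

In PaaS and SaaS the same table would shift more rows to "provider," which is the essential difference between the three models.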

Question 14

A network administrator needs to assign IP addresses automatically while controlling which devices receive specific addresses. Which solution is appropriate?

A) DHCP reservation
B) Static IP assignment
C) NAT
D) Port forwarding

Answer: A

Explanation:

In networks where administrators require automatic IP address assignment while maintaining control over specific devices, DHCP reservation is the optimal solution. DHCP reservation allows the Dynamic Host Configuration Protocol server to assign predefined IP addresses to devices based on their MAC addresses, ensuring that critical devices, such as servers, printers, or access points, always receive the same IP. Option B) Static IP assignment achieves consistent IPs but requires manual configuration, which is less efficient and prone to errors in larger networks. Option C) NAT translates internal IPs for external communication but does not assign IPs. Option D) Port forwarding directs specific external traffic to internal devices and does not manage IP assignment.

DHCP reservation combines the benefits of dynamic and static IP addressing. Devices not in the reservation table can still receive automatic IP assignments from the available pool, while reserved devices maintain consistent IPs for network management and service accessibility. This method enhances network organization, simplifies troubleshooting, and supports services that require predictable IP addresses, such as monitoring systems, printers, and VoIP devices.

Administrators configure DHCP reservations by associating a device’s MAC address with an IP address in the DHCP server configuration. This process allows the device to automatically receive the reserved IP whenever it connects to the network. DHCP logs provide visibility into assigned IPs, lease times, and client activity, enabling proactive management of network resources. Reservation can also support failover scenarios by ensuring critical devices retain the same address even if the primary DHCP server fails and backup servers take over.

Using DHCP reservation also enhances security by controlling which devices can obtain specific IPs, reducing IP conflicts and unauthorized device access. It simplifies network maintenance in dynamic environments where devices frequently join and leave the network. Combined with subnetting, VLANs, and access control measures, DHCP reservation ensures efficient address allocation, predictable network behavior, and a scalable infrastructure suitable for enterprise networks, educational institutions, and managed service providers.
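The selection logic a DHCP server applies can be sketched as follows. This is a simplified, hypothetical model (real servers also track lease expiry and offer/acknowledge exchanges), but it shows how a reservation keyed on the MAC address always wins over the dynamic pool:

```python
import ipaddress

# Hypothetical reservation table: MAC address -> fixed IP.
RESERVATIONS = {
    "aa:bb:cc:dd:ee:01": ipaddress.ip_address("192.168.1.10"),  # print server
    "aa:bb:cc:dd:ee:02": ipaddress.ip_address("192.168.1.11"),  # VoIP gateway
}

# Dynamic pool: 192.168.1.100 - 192.168.1.149.
POOL = [ipaddress.ip_address("192.168.1.100") + i for i in range(50)]
leases = {}  # MAC -> currently assigned IP

def assign_ip(mac: str):
    mac = mac.lower()
    if mac in RESERVATIONS:            # a reservation always takes priority
        return RESERVATIONS[mac]
    if mac in leases:                  # renew the existing dynamic lease
        return leases[mac]
    in_use = set(RESERVATIONS.values()) | set(leases.values())
    for candidate in POOL:             # first free address from the pool
        if candidate not in in_use:
            leases[mac] = candidate
            return candidate
    raise RuntimeError("DHCP pool exhausted")

print(assign_ip("AA:BB:CC:DD:EE:01"))  # 192.168.1.10 (reserved)
print(assign_ip("aa:bb:cc:dd:ee:99"))  # 192.168.1.100 (dynamic)
```

Note that the MAC is normalized to lowercase before lookup; on real servers, a reservation entered with mismatched capitalization or delimiter style is a common reason a device unexpectedly pulls a pool address instead.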

Question 15

Which wireless security protocol uses AES encryption and is considered the most secure for enterprise Wi-Fi networks?

A) WPA2
B) WEP
C) WPA
D) TKIP

Answer: A

Explanation:

Wireless security is a critical component of enterprise network protection. WPA2, which stands for Wi-Fi Protected Access 2, is considered the most secure widely implemented protocol because it uses AES (Advanced Encryption Standard) encryption to protect wireless data traffic. AES provides strong encryption that is computationally infeasible for attackers to break by brute force. Option B) WEP is outdated, vulnerable to numerous attacks, and unsuitable for modern networks. Option C) WPA improved upon WEP but relies on TKIP, which has known vulnerabilities. Option D) TKIP (Temporal Key Integrity Protocol) was designed as a temporary fix for WEP and is no longer considered secure.

WPA2 operates in both personal (pre-shared key) and enterprise (802.1X authentication) modes. Enterprise mode leverages a RADIUS server to authenticate users individually, providing accountability and preventing unauthorized access. The protocol protects data integrity and confidentiality by encrypting traffic between clients and access points, significantly reducing the risk of eavesdropping, man-in-the-middle attacks, and session hijacking.

Implementing WPA2 requires configuring access points with strong passphrases, secure authentication servers, and proper key management policies. Administrators must also ensure client devices support AES encryption and receive regular firmware updates to address security vulnerabilities. WPA2 also supports the CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code Protocol) encryption standard, which further enhances security and protects against replay attacks.

For enterprise networks, WPA2-Enterprise with 802.1X authentication ensures that each device has unique credentials, making network access control more granular and auditable. Regular monitoring, logging, and integration with intrusion detection systems provide visibility into potential threats. While WPA3 is the latest standard, WPA2 remains widely deployed and supported across devices, offering a robust combination of security, performance, and interoperability. Choosing WPA2 over older protocols like WEP or TKIP significantly strengthens network resilience against modern threats.
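In WPA2-Personal mode, the reason strong passphrases matter can be shown directly: the 256-bit Pairwise Master Key is derived from the passphrase and SSID using PBKDF2-HMAC-SHA1 with 4096 iterations (per IEEE 802.11i), so a short or common passphrase can be brute-forced offline from a captured handshake. A minimal sketch using only the Python standard library:

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the WPA2-Personal Pairwise Master Key (PMK) from a passphrase.

    Per IEEE 802.11i: PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations,
    256-bit output. The SSID acting as salt means the same passphrase
    yields different keys on different networks.
    """
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, 32
    )

pmk = wpa2_psk("correct horse battery staple", "CorpWiFi")
print(len(pmk), pmk.hex()[:16], "...")
```

The 4096 iterations slow down dictionary attacks but do not stop them against weak passphrases, which is one reason WPA2-Enterprise with per-user 802.1X credentials is preferred in corporate deployments.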

Question 16

A technician needs to identify which device on a network is causing frequent packet collisions and performance issues. Which tool is used?

A) Network analyzer
B) ipconfig
C) tracert
D) ping

Answer: A

Explanation:

In complex networking environments, frequent packet collisions and degraded performance often indicate underlying hardware or configuration issues. A network analyzer is the optimal tool for identifying which device is responsible. Network analyzers, also known as packet sniffers, capture and inspect network traffic in real time, providing detailed insights into packet composition, source and destination addresses, protocols, and errors such as collisions or retransmissions. By analyzing traffic patterns, administrators can pinpoint problematic devices, misconfigured ports, or overloaded switches contributing to congestion. Option B) ipconfig only displays IP configurations and cannot analyze traffic. Option C) tracert identifies routing paths and latency but does not detect packet collisions. Option D) ping measures connectivity and latency but provides minimal information about traffic integrity.

Network analyzers are critical for troubleshooting at Layer 2 and Layer 3, particularly in Ethernet networks where collisions may occur due to half-duplex configurations, faulty cabling, or hub deployment. Modern analyzers support filtering, protocol decoding, and traffic visualization, making it easier to focus on specific devices or segments. By observing metrics such as packet retransmission, CRC errors, and collision counts, administrators can isolate sources of network degradation without interrupting other services.

Using a network analyzer requires understanding of protocols like TCP/IP, UDP, ICMP, and Ethernet frame structure. Administrators can capture real-time traffic or perform historical analysis by storing packet captures for deeper examination. Captured traffic can also be replayed with companion tools to reproduce problem conditions, helping IT teams test changes before deploying them network-wide. Integration with monitoring solutions can generate alerts when collision rates exceed thresholds or unusual traffic patterns are detected, providing proactive network management.

Network analyzers enhance security by detecting anomalies such as unauthorized devices, rogue access points, or potential intrusions. By correlating packet captures with device MAC addresses and IP assignments, administrators can maintain accountability and enforce policies. In enterprise settings, packet analysis also helps optimize Quality of Service (QoS) by identifying high-bandwidth applications, prioritizing critical traffic, and reducing congestion. Proper use of a network analyzer requires careful handling of sensitive data, as captured packets may contain confidential information. Overall, network analyzers are indispensable tools for maintaining high-performance, secure, and reliable networks, addressing both operational and troubleshooting needs.
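The first step every packet sniffer performs on a captured frame can be sketched in a few lines: decode the 14-byte Ethernet header into destination MAC, source MAC, and EtherType before handing the payload to a protocol-specific parser. The frame below is hand-built for illustration, not captured from a live network:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Decode the 14-byte Ethernet II header of a raw frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), hex(ethertype)

# Example frame: broadcast destination, EtherType 0x0800 (IPv4 payload).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
print(parse_ethernet_header(frame))
# ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800')
```

Real analyzers such as Wireshark apply the same layered decoding, then use the EtherType (0x0800 for IPv4, 0x86DD for IPv6, 0x0806 for ARP) to select the next dissector, which is how per-protocol filtering and statistics become possible.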

Question 17

Which network topology physically connects all devices to a central hub or switch, providing simplified management and isolation of traffic?

A) Star topology
B) Mesh topology
C) Bus topology
D) Ring topology

Answer: A

Explanation:

Network topology selection directly impacts performance, fault tolerance, and management complexity. In a star topology, all devices are physically connected to a central hub or switch. This configuration centralizes traffic management, simplifies troubleshooting, and allows isolation of individual devices without affecting the rest of the network. Option B) Mesh topology involves interconnecting devices directly to multiple other devices, providing redundancy but increasing complexity. Option C) Bus topology uses a single backbone cable with terminators at each end, making fault isolation challenging and prone to collisions. Option D) Ring topology connects devices in a closed loop, where a single failure can disrupt the entire network unless redundant paths exist.

Star topology is widely implemented in modern Ethernet networks due to its manageability and scalability. Central switches act as traffic concentrators, controlling data flow and preventing collisions when using full-duplex connections. Each device communicates with the switch rather than directly with other devices, enhancing network efficiency and reducing broadcast congestion. This setup allows administrators to add or remove devices without disrupting network operations.

From a troubleshooting perspective, star topology simplifies problem identification. If one device or its connection fails, only that device is affected, and the rest of the network continues functioning normally. Administrators can quickly isolate faults to a single port, replace cables, or reboot endpoints without extensive downtime. Centralized switches also enable monitoring and logging of traffic patterns, helping identify bandwidth-hogging devices or security breaches.

Star topology is compatible with VLAN segmentation, Quality of Service (QoS) policies, and link aggregation, supporting enterprise scalability. Switches at the center may include advanced features such as port security, access control lists, and network segmentation, enhancing both security and performance. Additionally, the physical layout of star topology allows for predictable expansion, minimizing cabling issues and providing clear pathways for maintenance. While redundancy in a star network may require additional switches or connections, the benefits of centralized management, simplified diagnostics, and scalability outweigh the additional hardware costs. Overall, star topology remains a foundational design in both small and large enterprise networks due to its reliability, maintainability, and operational efficiency.
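The fault-isolation property described above can be modeled with a small reachability check. This is an illustrative toy (device names and topology are invented): losing a leaf leaves the rest of the star connected, while losing the central switch partitions everything, which is why the switch is the single point of failure to harden or duplicate.

```python
from collections import deque

def connected(links, src, dst):
    """BFS reachability over an undirected list of (device, device) links."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

# Star: PCs A-D each cabled only to the central switch SW.
star = [("SW", pc) for pc in "ABCD"]

# Unplug PC D: the remaining devices still reach each other.
without_d = [link for link in star if "D" not in link]
print(connected(without_d, "A", "B"))   # True

# Lose the switch itself: the whole segment goes down.
without_sw = [link for link in star if "SW" not in link]
print(connected(without_sw, "A", "B"))  # False
```

Running the same check on a full-mesh link list would show every pair surviving any single failure, which captures the redundancy-versus-complexity trade-off between the two topologies.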

Question 18

Which protocol provides secure encrypted remote access to a network device, replacing insecure Telnet communications?

A) SSH
B) FTP
C) HTTP
D) SNMP

Answer: A

Explanation:

Secure remote management of network devices is essential for protecting sensitive configurations and data. SSH, or Secure Shell, provides encrypted remote access, replacing insecure Telnet communications, which transmit credentials and data in plaintext. SSH uses strong cryptographic algorithms to establish a secure connection between client and server, ensuring confidentiality, integrity, and authentication. Option B) FTP is used for file transfers and does not provide secure remote device management. Option C) HTTP is the standard protocol for web communication and is not designed for secure shell access. Option D) SNMP monitors and manages network devices but does not offer interactive remote command execution.

SSH supports both password and key-based authentication, allowing administrators to enforce strict access policies. Key-based authentication eliminates the need for plaintext passwords, reducing the risk of credential compromise. Session encryption ensures that network configuration commands, sensitive outputs, and administrative credentials are not exposed to potential eavesdroppers on the network. SSH also supports tunneling, which allows other protocols such as VNC or RDP to be securely transmitted over an encrypted channel, enhancing remote management capabilities.

From a network security perspective, using SSH is a best practice for configuring routers, switches, firewalls, and servers. Modern implementations support multiple encryption algorithms such as AES, RSA, and ECC, ensuring robust protection even against sophisticated attacks. Administrators can configure SSH to limit user permissions, restrict source IP addresses, and log all session activities, providing both operational control and auditability.

SSH can be used for automated tasks via scripts, enabling secure remote execution of commands, backups, and system updates. In enterprise networks, central management solutions often integrate SSH to maintain consistent configurations across multiple devices, ensuring compliance with security policies and reducing manual intervention. Replacing Telnet with SSH reduces the attack surface and strengthens the overall security posture of the network, particularly for organizations with remote administrators or geographically dispersed infrastructure. Proper SSH deployment, including regular key rotation, strong passphrases, and limited access policies, is critical for maintaining a secure and efficient enterprise network environment.

Question 19

A network administrator must ensure sensitive data is protected during transmission over public networks. Which method provides encryption and secure tunneling?

A) VPN
B) NAT
C) DHCP
D) ICMP

Answer: A

Explanation:

When transmitting sensitive data across public networks such as the internet, protecting information from interception and tampering is critical. A VPN, or Virtual Private Network, establishes an encrypted and secure tunnel between devices or networks, ensuring confidentiality and integrity. VPNs prevent unauthorized access by encrypting traffic using protocols and technologies such as IPsec, SSL/TLS, or OpenVPN. Option B) NAT translates IP addresses for connectivity but does not provide encryption. Option C) DHCP automatically assigns IP addresses and network configurations but is unrelated to data security. Option D) ICMP facilitates network diagnostics but offers no security.

VPNs are widely used by organizations to connect remote employees, branch offices, and cloud services securely. By encrypting packets and authenticating endpoints, VPNs prevent eavesdropping, man-in-the-middle attacks, and data leakage. Tunnel protocols encapsulate packets within an encrypted payload, which can traverse unsecured networks without exposing the original content. Administrators can implement VPNs with strong authentication mechanisms, including certificates, multifactor authentication, or pre-shared keys, enhancing protection against unauthorized access.

Different VPN types support varied deployment scenarios. Remote-access VPNs enable individual users to securely connect to corporate networks, whereas site-to-site VPNs connect entire networks across different geographic locations. VPNs can also provide granular access control, routing only specific traffic through encrypted tunnels while leaving non-sensitive traffic unencrypted to optimize performance. Additionally, modern VPN solutions integrate logging, monitoring, and intrusion detection to detect anomalies and potential security breaches.

From an operational perspective, VPNs allow organizations to maintain compliance with regulations requiring data encryption during transmission, such as HIPAA, GDPR, or PCI-DSS. Performance considerations include encryption overhead, bandwidth limitations, and potential latency, which must be accounted for during deployment. Proper configuration, including strong encryption algorithms and secure key management, is essential to ensure VPN effectiveness. By implementing VPNs across public networks, administrators create a secure and private communication channel, preserving confidentiality, integrity, and availability of sensitive data.

Question 20

Which technology allows multiple devices to share a single public IP address while maintaining private internal IP addresses?

A) NAT
B) DHCP
C) VLAN
D) ARP

Answer: A

Explanation:

Network Address Translation (NAT) enables multiple devices within a private network to share a single public IP address when accessing external networks such as the internet. NAT modifies packet headers, translating private internal IP addresses into the public address and mapping return traffic back to the originating device. Option B) DHCP assigns IP addresses dynamically but does not perform translation. Option C) VLAN segments networks logically without changing IP addresses. Option D) ARP resolves MAC addresses from IP addresses but does not provide sharing of public IPs.

NAT is a fundamental technology in both small and large networks to conserve IPv4 address space, especially given the limited availability of public IP addresses. It provides an additional layer of security by obscuring internal network structure from external entities, reducing exposure to attacks. NAT can operate in various modes, including static NAT, dynamic NAT, and PAT (Port Address Translation). PAT, commonly called “NAT overload,” allows multiple internal hosts to share a single public IP address by using unique source port numbers to differentiate sessions.

From a practical perspective, NAT enables seamless internet connectivity for home networks, enterprise networks, and service provider infrastructures. Administrators can implement NAT on routers and firewalls to control traffic flow, maintain address translation tables, and enforce security policies. NAT also supports VPN and remote access solutions by ensuring consistent external IP representation while preserving internal addressing schemes.

Despite its benefits, NAT introduces challenges such as difficulties in peer-to-peer communications, application compatibility, and troubleshooting due to address translation complexity. Modern networks increasingly rely on IPv6 to eliminate the need for NAT; however, NAT remains prevalent in IPv4-dominated networks. Effective NAT configuration, monitoring, and integration with other security technologies such as firewalls and intrusion detection systems enhance both connectivity and protection. Understanding NAT principles is critical for network administrators, ensuring proper routing, address conservation, and secure external communication for multiple devices sharing limited IP resources.
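The port-based session tracking that makes PAT ("NAT overload") work can be sketched as a translation table. This is a conceptual model only (a real router also tracks protocol, timeouts, and state); the public address below is from the documentation-reserved TEST-NET-3 range:

```python
import itertools

PUBLIC_IP = "203.0.113.5"            # example address, TEST-NET-3 range
_next_port = itertools.count(49152)  # start of the ephemeral port range
translation_table = {}               # (private_ip, private_port) -> public_port

def translate_outbound(private_ip, private_port):
    """Rewrite an outbound flow's source to the shared public IP + unique port."""
    key = (private_ip, private_port)
    if key not in translation_table:
        translation_table[key] = next(_next_port)
    return PUBLIC_IP, translation_table[key]

def translate_inbound(public_port):
    """Map a reply arriving on the public port back to the internal host."""
    for (ip, port), pub in translation_table.items():
        if pub == public_port:
            return ip, port
    raise KeyError("no matching translation entry")

print(translate_outbound("192.168.1.20", 51000))  # ('203.0.113.5', 49152)
print(translate_outbound("192.168.1.21", 51000))  # ('203.0.113.5', 49153)
print(translate_inbound(49153))                   # ('192.168.1.21', 51000)
```

Note that both internal hosts use source port 51000 yet remain distinguishable externally, which is exactly how thousands of devices share one IPv4 address; the lack of an entry for unsolicited inbound traffic is also why NAT incidentally blocks connections initiated from outside.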

 
