Question 141:
What is the primary purpose of a route map in Cisco IOS?
A) To create static routes automatically
B) To control route redistribution and policy-based routing
C) To enable dynamic routing protocols
D) To configure NAT translations
Answer: B
Explanation:
Route maps are powerful and flexible policy tools in Cisco IOS used primarily to control route redistribution between routing protocols and implement policy-based routing decisions. Route maps function similarly to access control lists but provide much more granular control over routing decisions by allowing administrators to match various packet or route characteristics and then apply specific actions based on those matches. This flexibility makes route maps essential for complex routing scenarios where simple permit or deny decisions are insufficient.
In route redistribution scenarios, route maps control which routes are redistributed from one routing protocol into another and can modify route attributes during the redistribution process. Administrators can match routes based on criteria such as prefix lists, access lists, route tags, metric values, next-hop addresses, or route sources. Once matched, route maps can set various attributes including metric values, metric types, route tags, next-hop addresses, administrative distances, or community values for BGP routes. This selective redistribution prevents unwanted routes from entering routing domains and ensures proper route attribute manipulation.
For policy-based routing, route maps examine packet characteristics rather than route attributes. Packets can be matched based on source or destination IP addresses, packet length, protocol types, or type of service values. Matched packets can then be forwarded based on policy decisions rather than normal destination-based routing table lookups. This enables traffic engineering, load distribution across multiple paths, or routing different traffic types through specific paths based on business requirements.
Route maps consist of numbered sequences that are processed in order from lowest to highest sequence number. Each sequence contains match and set statements. If all match conditions in a sequence are met, the set actions are applied and processing stops. If match conditions are not met, processing continues to the next sequence. An implicit deny exists at the end of every route map.
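The sequence/match/set structure described above can be sketched as a minimal redistribution configuration; the prefix list name LAN-NETS, the OSPF process number, and the EIGRP AS number are hypothetical:

```
! Permit only selected prefixes into OSPF; all other routes
! fall through to the implicit deny at the end of the route map
ip prefix-list LAN-NETS seq 5 permit 10.10.0.0/16 le 24
!
route-map EIGRP-TO-OSPF permit 10
 match ip address prefix-list LAN-NETS
 set metric 100
 set tag 500
!
router ospf 1
 redistribute eigrp 100 subnets route-map EIGRP-TO-OSPF
```

Routes matching LAN-NETS enter OSPF with the configured metric and tag; everything else is blocked by the implicit deny.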
Option A is incorrect because route maps do not create static routes automatically; static routes are manually configured. Option C is incorrect because dynamic routing protocols are enabled through routing protocol configuration commands. Option D is incorrect because NAT translations use different configuration mechanisms, though route maps can be used with NAT in some advanced scenarios.
Question 142:
Which command displays the MAC address table on a Cisco switch?
A) show mac-address
B) show mac address-table
C) show arp
D) show cam table
Answer: B
Explanation:
The command show mac address-table displays the MAC address table on Cisco switches, providing visibility into which MAC addresses have been learned on which switch ports. This table is fundamental to switch operation as it enables the switch to make forwarding decisions by mapping destination MAC addresses to specific outbound ports. The MAC address table is dynamically built as the switch learns source MAC addresses from incoming frames and associates them with the ports on which they were received.
The output of show mac address-table includes several columns of information: the VLAN number to which the MAC address belongs, the MAC address itself in hexadecimal format, the type of entry which can be dynamic (learned through normal operation), static (manually configured), or other special types, and the port or interface where the MAC address was learned. Dynamic entries have an aging timer, typically 300 seconds by default, after which they are removed from the table if no frames are received from that MAC address. This aging mechanism prevents the table from becoming filled with stale entries.
Network administrators use this command extensively for troubleshooting connectivity issues, verifying that devices are properly connected and communicating, identifying which port a specific device is connected to, detecting potential MAC address conflicts or spoofing, and understanding VLAN membership of connected devices. The command can be filtered using various options such as specifying a particular VLAN, interface, or MAC address to narrow the output for easier analysis in large networks.
Additional variations of the command provide more specific information. The show mac address-table dynamic command displays only dynamically learned entries, show mac address-table static shows manually configured entries, show mac address-table aging-time displays the current aging timer value, and show mac address-table count provides statistics about the number of entries in the table.
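The base command and the variations above might be used as follows; the interface and VLAN numbers are placeholders:

```
Switch# show mac address-table
Switch# show mac address-table dynamic
Switch# show mac address-table static
Switch# show mac address-table interface gigabitethernet0/1
Switch# show mac address-table vlan 10
Switch# show mac address-table aging-time
Switch# show mac address-table count
```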
Option A is incorrect because show mac-address is not valid syntax on modern Cisco switch platforms. Option C is incorrect because show arp displays the ARP table mapping IP addresses to MAC addresses at Layer 3, not the MAC address table at Layer 2. Option D is incorrect because while older Catalyst switches used show cam table, modern IOS versions use show mac address-table.
Question 143:
What is the purpose of the passive-interface command in OSPF configuration?
A) To disable OSPF on an interface completely
B) To prevent sending OSPF hello packets while still advertising the network
C) To reduce OSPF priority to zero
D) To enable OSPF authentication
Answer: B
Explanation:
The passive-interface command in OSPF configuration prevents the router from sending OSPF hello packets out the specified interface while still advertising that interface’s network in OSPF updates to other routers. This selective suppression of hello packets is valuable for interfaces connected to networks where no OSPF neighbors exist or should exist, such as user access segments, stub networks, or networks containing only end devices. By making an interface passive, administrators improve security by not exposing OSPF protocol operations to untrusted segments and reduce unnecessary protocol overhead on networks where neighbor adjacencies are not needed.
When an interface is configured as passive in OSPF, several important behaviors occur. The router stops sending OSPF hello packets out that interface, meaning no OSPF neighbor relationships can form through that interface. However, the network associated with the passive interface continues to be advertised in OSPF LSAs sent to other OSPF neighbors on non-passive interfaces. This ensures that other routers in the OSPF domain learn about the passive interface’s network and can route traffic to it appropriately. OSPF packets received on the passive interface are ignored, so an adjacency cannot form even if another router on that segment sends hellos.
The passive-interface command is particularly useful in hub-and-spoke topologies where the spoke routers should not form OSPF adjacencies with each other, in DMZ or guest networks where OSPF operations should not be visible to potentially untrusted hosts, and in scenarios where interfaces connect to end-user segments that should be advertised in OSPF but do not require OSPF neighbor relationships. The command can be configured per-interface using passive-interface followed by the interface identifier, or all interfaces can be made passive by default using passive-interface default, then selectively enabling OSPF on specific interfaces using no passive-interface.
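The passive-interface default approach described above might look like this; the interface and network numbers are hypothetical:

```
router ospf 1
 ! Suppress hellos on all interfaces by default
 passive-interface default
 ! Re-enable hellos only where neighbors should form
 no passive-interface GigabitEthernet0/0
 network 10.0.0.0 0.0.255.255 area 0
```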
Best practices recommend using passive-interface on all interfaces that do not require OSPF neighbor adjacencies. This reduces attack surface, minimizes processing overhead, and prevents accidental adjacency formation with unauthorized devices.
Option A is incorrect because the interface is not completely disabled; the network is still advertised. Option C is incorrect because passive-interface does not affect OSPF priority values. Option D is incorrect because OSPF authentication is configured using separate authentication commands.
Question 144:
Which IPv6 address type is used for one-to-nearest communication?
A) Unicast
B) Multicast
C) Anycast
D) Broadcast
Answer: C
Explanation:
Anycast is the IPv6 address type designed specifically for one-to-nearest communication, where packets sent to an anycast address are delivered to the nearest interface (in terms of routing distance) among a group of interfaces that share the same anycast address. This communication paradigm is particularly valuable for distributed services, load balancing, and redundancy scenarios where multiple servers or devices provide the same service and clients should automatically connect to the closest or most optimal instance without requiring specific knowledge of individual server locations.
In anycast operation, multiple devices are configured with the same IPv6 address, typically on loopback or other interfaces. Routing protocols treat this address like any other destination and advertise routes to it. When a client sends a packet to the anycast address, the routing infrastructure automatically delivers the packet to the topologically nearest device advertising that address based on routing metrics and path costs. If the nearest device becomes unavailable, routing protocols automatically converge and traffic is redirected to the next nearest device sharing the anycast address, providing automatic failover and redundancy.
Common use cases for anycast include DNS root servers where multiple physical servers share the same IP addresses globally and clients automatically reach the nearest one, content delivery networks where edge servers share anycast addresses to serve content from locations closest to users, and load balancing for services where geographic or network proximity is important. IPv6 explicitly defines and supports anycast addresses, with certain address ranges reserved for specific anycast purposes such as subnet-router anycast addresses that represent all routers on a link.
An important distinction is that anycast addresses are allocated from the unicast address space and are syntactically identical to unicast addresses. The anycast behavior is determined by configuration rather than by the address format itself. Multiple devices are simply configured with the same address, and routing ensures packets reach the nearest one.
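Because anycast behavior comes from configuration rather than address format, deploying it can be as simple as assigning the same address on multiple routers. A sketch using the IPv6 documentation prefix (the address itself is hypothetical):

```
! Configured identically on Router A and Router B;
! routing delivers traffic to whichever is topologically closer
interface Loopback0
 ipv6 address 2001:db8::53/128 anycast
```

The anycast keyword tells the router the address may be shared, so it will not perform duplicate address detection against the other devices using it.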
Option A is incorrect because unicast addresses are for one-to-one communication with a single destination. Option B is incorrect because multicast is one-to-many communication with multiple receivers. Option D is incorrect because broadcast does not exist in IPv6; multicast addresses serve the functions previously handled by broadcast in IPv4.
Question 145:
What is the default VLAN on a Cisco switch?
A) VLAN 0
B) VLAN 1
C) VLAN 10
D) VLAN 1002
Answer: B
Explanation:
VLAN 1 is the default VLAN on all Cisco switches and serves as the initial VLAN assignment for all switch ports upon factory reset or initial configuration. Every Cisco switch comes preconfigured with VLAN 1 as the default VLAN, and all ports are assigned to VLAN 1 by default, allowing immediate basic connectivity without any VLAN configuration. Additionally, VLAN 1 serves as the default native VLAN for trunk links and carries certain control plane protocols by default, making it a fundamental component of Cisco switch operations.
VLAN 1 has several special characteristics that distinguish it from other VLANs. It cannot be deleted or removed from the switch, though it can be administratively shut down in some switch models. VLAN 1 is the default VLAN for all switch management protocols including VTP, CDP, DTP, and PAgP, which typically operate on VLAN 1 unless specifically configured otherwise. Additionally, switch management interfaces like the VLAN 1 SVI (Switch Virtual Interface) are commonly used for in-band management access to the switch itself.
Despite its convenience, using VLAN 1 for production traffic presents security concerns. Because VLAN 1 is well-known and predictable, it becomes a target for various attacks including VLAN hopping and unauthorized access attempts. Security best practices strongly recommend moving all user and data traffic off VLAN 1, changing the native VLAN on trunk ports from VLAN 1 to an unused VLAN, using VLAN 1 only for switch management or not at all, and implementing proper access controls on any interfaces remaining in VLAN 1. Many security frameworks and compliance standards require demonstrating that VLAN 1 is not used for production traffic.
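The hardening steps above might look like this in configuration; the VLAN and interface numbers are placeholders:

```
! Create an unused VLAN to serve as the trunk native VLAN
vlan 999
 name UNUSED-NATIVE
!
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk native vlan 999
 ! Do not carry VLAN 1 on the trunk
 switchport trunk allowed vlan 10,20,30
```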
Modern network designs typically create separate VLANs for different traffic types and purposes, such as user data VLANs, voice VLANs for IP phones, management VLANs for network device administration, and guest VLANs for visitor access. VLAN 1 might be left unused or used only for specific management purposes with strict access controls.
Option A is incorrect because VLAN 0 is reserved and not used for normal operations. Option C is incorrect because VLAN 10 is not a default and must be explicitly created. Option D is incorrect because VLAN 1002 is one of the default token ring VLANs, not the default Ethernet VLAN.
Question 146:
Which protocol provides automatic failover for default gateways in a LAN environment?
A) STP
B) HSRP
C) CDP
D) LLDP
Answer: B
Explanation:
HSRP (Hot Standby Router Protocol) is a Cisco proprietary first-hop redundancy protocol that provides automatic failover for default gateways in LAN environments, ensuring continuous network connectivity for end devices even when their configured default gateway router fails. HSRP allows multiple routers to work together in a group, presenting a single virtual IP address and virtual MAC address to end devices. One router in the group serves as the active router handling all traffic, while other routers serve as standby routers ready to take over immediately if the active router becomes unavailable.
The HSRP operation involves several key components and states. Routers participating in an HSRP group exchange hello messages, typically every 3 seconds, to monitor each other’s availability and advertise their priority values. The router with the highest priority becomes the active router, while the router with the second-highest priority becomes the standby router. Additional routers in the group remain in listen state. The active router responds to ARP requests for the virtual IP address using the virtual MAC address, ensuring that end device traffic is directed to it. When the standby router stops receiving hello messages from the active router for the hold time period (default 10 seconds), it assumes the active role and begins responding to the virtual IP address.
HSRP supports several advanced features including interface tracking, where the active router monitors specific interfaces and automatically reduces its priority if those interfaces fail, ensuring that a router with failed uplinks does not remain active. Preemption can be configured to allow a router with higher priority to reclaim the active role after recovering from a failure. Multiple HSRP groups can be configured on a single interface, enabling load sharing by having different routers active for different virtual IP addresses.
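A minimal HSRP sketch combining priority, preemption, and the legacy interface-tracking syntax described above; the addresses, group number, and interfaces are hypothetical:

```
interface GigabitEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 standby 1 ip 10.1.1.1
 standby 1 priority 110
 standby 1 preempt
 ! Decrement priority by 20 if the uplink fails
 standby 1 track GigabitEthernet0/1 20
```

With the default priority of 100 on the peer, this router is active; if GigabitEthernet0/1 goes down, its priority drops to 90 and a preempting peer takes over.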
Common alternatives to HSRP include VRRP (Virtual Router Redundancy Protocol), an open standard providing similar functionality, and GLBP (Gateway Load Balancing Protocol), which provides both redundancy and load balancing by allowing multiple routers to simultaneously forward traffic for the same virtual gateway.
Option A is incorrect because STP prevents Layer 2 loops in switched networks, not gateway redundancy. Option C is incorrect because CDP is a discovery protocol for identifying connected Cisco devices. Option D is incorrect because LLDP is a vendor-neutral discovery protocol, not a redundancy protocol.
Question 147:
What is the maximum number of VLANs supported in the standard VLAN range?
A) 1005
B) 1024
C) 4094
D) 4096
Answer: A
Explanation:
The standard VLAN range on Cisco switches supports a maximum of 1005 VLANs, numbered from VLAN 1 to VLAN 1005. This 1005-VLAN limit is a legacy of Cisco’s ISL trunking encapsulation and original VTP design rather than of IEEE 802.1Q, whose 12-bit VLAN ID field allows values 0 through 4095. Standard-range VLAN definitions are stored in the vlan.dat file in the switch’s flash memory. The standard range VLANs have full support for VTP (VLAN Trunking Protocol) propagation, can be created, modified, and deleted from VLAN database mode or global configuration mode, and are compatible with all switch models and software versions.
Within the standard VLAN range, certain VLANs are reserved for specific purposes and cannot be deleted or modified. VLAN 1 is the default VLAN and default native VLAN for trunk ports, serving special purposes in switch operations. VLANs 1002 through 1005 are reserved for legacy token ring and FDDI VLANs, remnants from older networking technologies that are rarely used in modern networks but remain reserved for backward compatibility. These reserved VLANs cannot be deleted, though they can generally be ignored in Ethernet-only environments.
The standard VLAN range was sufficient for most network deployments when originally defined, but modern large-scale data center and service provider networks often require more VLANs than the standard range provides. This limitation led to the introduction of extended VLANs (1006-4094), which provide additional capacity but with some operational differences including requirement for VTP transparent or VTP version 3 mode, storage in the running configuration rather than vlan.dat file, and manual configuration synchronization between switches rather than automatic VTP propagation.
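Creating an extended-range VLAN therefore requires moving VTP out of server/client mode first (unless VTP version 3 is in use); the VLAN number and name below are placeholders:

```
Switch(config)# vtp mode transparent
Switch(config)# vlan 2500
Switch(config-vlan)# name DC-SEGMENT-EXAMPLE
```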
Network design best practices recommend careful VLAN planning to avoid exhausting available VLANs. Strategies include using the same VLAN numbers for consistent purposes across different switch stacks, eliminating unused VLANs, implementing VLAN summarization or aggregation where possible, and considering extended VLANs when standard range capacity is insufficient. Documentation of VLAN assignments and purposes is essential for maintaining organized and scalable network designs.
Option B is incorrect because 1024 is not a VLAN boundary on Cisco switches. Option C is incorrect because 4094 is the maximum of the extended range, not standard range. Option D is incorrect because while 4096 is the theoretical maximum including VLAN 0 and 4095, only VLANs 1-4094 are usable.
Question 148:
Which EIGRP packet type is used to acknowledge receipt of EIGRP messages?
A) Hello
B) Update
C) Query
D) ACK
Answer: D
Explanation:
The ACK (Acknowledgment) packet is the EIGRP packet type specifically designed to acknowledge receipt of reliable EIGRP messages including update, query, and reply packets. EIGRP uses a reliable transport mechanism called Reliable Transport Protocol (RTP) to ensure critical routing information is successfully delivered between neighbors. When a router sends a reliable EIGRP packet, it expects to receive an ACK from the receiving router confirming successful delivery. This acknowledgment mechanism prevents routing information loss and ensures database synchronization between EIGRP neighbors.
ACK packets are actually hello packets with an empty body but containing a non-zero acknowledgment number in the EIGRP header. The acknowledgment number corresponds to the sequence number of the packet being acknowledged. This design efficiently reuses the hello packet format rather than creating a separate packet type. ACK packets are sent as unicast messages directly to the router that sent the original reliable packet, rather than being multicast to all neighbors. If an ACK is not received within the retransmission timeout period, the sending router retransmits the reliable packet up to 16 times before declaring the neighbor dead.
The reliable transport mechanism is crucial for EIGRP’s stability and correct operation. Update packets containing routing information must be reliably delivered to ensure all routers have accurate topology information. Query packets sent during route computations need acknowledgment to confirm neighbors received the query and will respond. Reply packets answering queries require reliable delivery to complete the DUAL algorithm’s convergence process. Without this reliability, EIGRP could experience routing loops, black holes, or inconsistent routing tables.
Hello packets themselves do not require acknowledgment because they serve as keepalives and are sent periodically. Missing a few hello packets is tolerable as long as some hellos arrive within the hold time. This unreliable delivery of hellos reduces protocol overhead while the reliable delivery of routing updates ensures accuracy. The combination of reliable and unreliable transport mechanisms gives EIGRP both efficiency and reliability.
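The reliable transport behavior can be observed with a single verification command, whose SRTT, RTO, and Q Cnt columns reflect the measured round-trip time, the retransmission timeout, and the number of reliable packets queued awaiting acknowledgment:

```
Router# show ip eigrp neighbors
```

A persistently non-zero Q Cnt for a neighbor suggests that reliable packets are not being acknowledged.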
Option A is incorrect because hello packets establish and maintain neighbor relationships but do not acknowledge other messages. Option B is incorrect because update packets carry routing information rather than serving as acknowledgments. Option C is incorrect because query packets are used to find alternate paths during route computation, not acknowledge messages.
Question 149:
What is the purpose of the ip helper-address command?
A) To configure a static IP address
B) To forward broadcast packets to specific servers
C) To enable IP routing
D) To configure NAT translations
Answer: B
Explanation:
The ip helper-address command configures a Cisco router to forward broadcast packets to specific server addresses, enabling services like DHCP, TFTP, DNS, and others to function across router boundaries in different IP subnets. Without this command, broadcast packets are not forwarded by routers because broadcasts are confined to their local broadcast domain. The helper address functionality allows routers to intercept certain broadcast packets, convert them to unicast packets addressed to the specified helper address, and forward them to servers in different subnets.
When ip helper-address is configured on an interface, the router examines broadcast packets arriving on that interface and, by default, forwards those destined to specific UDP ports to the configured helper address. These default forwarded ports include UDP port 67 for DHCP/BOOTP server, UDP port 68 for DHCP/BOOTP client, UDP port 69 for TFTP, UDP port 37 for the Time service, UDP port 49 for TACACS, UDP port 53 for DNS, UDP port 137 for NetBIOS name service, and UDP port 138 for NetBIOS datagram service. The router changes the destination IP address from the broadcast address to the unicast helper address and forwards the packet, allowing the centralized server to receive and respond to the request.
The most common use case for ip helper-address is enabling DHCP services across multiple subnets with a centralized DHCP server. When a client sends a DHCP discover broadcast, the router on the client’s subnet forwards it to the DHCP server’s IP address configured as the helper address. The DHCP server responds with an offer, and the router relays it back to the client. Multiple helper addresses can be configured on a single interface, causing the router to forward broadcasts to multiple servers for redundancy or load distribution.
Administrators can customize which UDP ports are forwarded using the ip forward-protocol udp command followed by the port number. This flexibility allows supporting additional services beyond the defaults or removing default forwarding for unused services to reduce unnecessary traffic. Security considerations include ensuring helper addresses point only to trusted servers and implementing access control to prevent abuse.
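A sketch of the relay and port-customization configuration described above; the server and interface addresses are hypothetical:

```
interface GigabitEthernet0/1
 ip address 192.168.10.1 255.255.255.0
 ! Relay client broadcasts (e.g. DHCP discovers) to two servers
 ip helper-address 10.0.0.50
 ip helper-address 10.0.0.51
!
! Optionally stop relaying a default-forwarded service, e.g. TFTP
no ip forward-protocol udp 69
```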
Option A is incorrect because static IP addresses are configured with the ip address command, not ip helper-address. Option C is incorrect because IP routing is enabled with ip routing at global configuration. Option D is incorrect because NAT is configured using ip nat commands, not helper addresses.
Question 150:
Which command shows the current configuration running on a Cisco device?
A) show startup-config
B) show running-config
C) show config
D) show version
Answer: B
Explanation:
The show running-config command displays the current active configuration running in the RAM of a Cisco device. This command is one of the most frequently used commands in Cisco IOS for viewing the device’s operational configuration, including all interface settings, routing protocols, security policies, VLANs, access control lists, and every other configuration element currently in effect. The running configuration represents the live, active settings that the device is currently using to operate and can be modified in real-time through configuration commands.
The running configuration resides in volatile RAM memory, meaning it is lost if the device is powered off or rebooted unless explicitly saved to non-volatile storage. When administrators make configuration changes in configuration mode, those changes immediately take effect in the running configuration. To preserve changes across reboots, the running configuration must be copied to the startup configuration using the copy running-config startup-config command or the shorthand write memory command. This separation between running and startup configurations allows administrators to test changes without committing them permanently.
The show running-config command output can be extensive on complex devices with many configured features. To make viewing more manageable, the command supports several modifiers and filters. The pipe symbol followed by section, include, or exclude keywords allows filtering output to show only relevant portions. For example, show running-config | section interface displays only interface configurations, show running-config | include ip address shows lines containing “ip address,” and show running-config interface gigabitethernet0/0 displays configuration for a specific interface only.
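The filtering options above might be used as follows:

```
Router# show running-config | section router ospf
Router# show running-config | include ip address
Router# show running-config | begin line vty
Router# show running-config interface gigabitethernet0/0
```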
Security best practices recommend restricting access to view running configurations because they contain sensitive information including passwords (even though encrypted), SNMP community strings, authentication keys, and network topology details. Role-based access control should limit which users can execute show running-config. Additionally, the service password-encryption command encrypts passwords in the running configuration, though this provides only basic obfuscation rather than strong cryptographic protection.
Option A is incorrect because show startup-config displays the configuration saved in NVRAM that loads at boot time, not the currently running configuration. Option C is incorrect because show config is not a valid command in Cisco IOS. Option D is incorrect because show version displays software version, hardware information, and uptime, not configuration details.
Question 151:
What is the purpose of VLSM (Variable Length Subnet Masking)?
A) To create VLANs automatically
B) To use different subnet masks for different subnets to optimize address space
C) To encrypt routing updates
D) To enable multicast routing
Answer: B
Explanation:
Variable Length Subnet Masking (VLSM) is an IP addressing technique that allows network administrators to use different subnet masks for different subnets within the same classful network, enabling more efficient utilization of IP address space. Before VLSM, all subnets within a network had to use the same subnet mask, often resulting in significant address waste when subnets of different sizes were needed. VLSM overcomes this limitation by permitting subnets with masks appropriate to their specific host requirements, dramatically improving addressing efficiency.
VLSM works by subdividing IP address space hierarchically, allocating larger address blocks to subnets with many hosts and smaller blocks to subnets with fewer hosts. For example, a point-to-point WAN link requiring only two host addresses can use a /30 subnet mask providing exactly two usable addresses, while a user access segment with 100 hosts might use a /25 mask providing 126 usable addresses. Without VLSM, both subnets would need to use the same mask, wasting addresses on the WAN link or insufficient addresses for the access segment.
Implementing VLSM requires careful planning and hierarchical addressing design. Administrators typically start by allocating the largest required subnets first, then subdividing remaining address space for progressively smaller subnets. This approach prevents address overlap and ensures efficient utilization. VLSM also requires routing protocols that support subnet mask information in routing updates, such as OSPF, EIGRP, IS-IS, and BGP. Older classful routing protocols like RIPv1 and IGRP do not support VLSM because they do not advertise subnet masks with routes.
The benefits of VLSM extend beyond address conservation. VLSM enables route summarization, where multiple subnets can be aggregated into a single routing advertisement by using a less specific summary address. This reduces routing table size, decreases routing update overhead, and improves routing protocol convergence times. Effective VLSM design combined with proper summarization creates scalable, efficient network addressing schemes essential for large enterprise and service provider networks.
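An illustrative VLSM plan for a single /24, matching the LAN/WAN sizing example above; the interfaces and addresses are hypothetical:

```
! 192.168.1.0/25   user LAN  (126 usable hosts)
! 192.168.1.128/30 WAN link  (2 usable hosts)
interface GigabitEthernet0/0
 ip address 192.168.1.1 255.255.255.128
!
interface Serial0/0/0
 ip address 192.168.1.129 255.255.255.252
```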
Option A is incorrect because VLSM relates to IP addressing, not VLAN creation. Option C is incorrect because VLSM does not provide encryption; it addresses IP addressing efficiency. Option D is incorrect because multicast routing is enabled through separate multicast protocols and configurations.
Question 152:
Which OSPF state indicates that database synchronization has been completed between neighbors?
A) Init
B) 2-Way
C) ExStart
D) Full
Answer: D
Explanation:
The Full state in OSPF represents the final and fully operational state where database synchronization has been completed between neighbors and the routers have identical link-state databases. Reaching the Full state indicates that the routers have successfully exchanged all link-state advertisements (LSAs), loaded them into their topology databases, and are ready to calculate routes using the SPF algorithm. Only neighbors in the Full state are considered fully adjacent and participate in routing calculations, making this state essential for proper OSPF operation.
The process of reaching the Full state involves several preceding states and exchanges. After establishing bidirectional communication (2-Way state), potential adjacent neighbors proceed through the ExStart state where they negotiate master-slave relationships and initial sequence numbers for database exchange. In the Exchange state, routers send database description packets summarizing their link-state databases. During the Loading state, routers request specific LSAs they are missing using link-state request packets and receive the LSAs via link-state update packets. Finally, when all LSAs have been received and acknowledged, neighbors transition to the Full state.
Not all OSPF neighbors reach the Full state. On multi-access networks like Ethernet, only the designated router (DR) and backup designated router (BDR) form Full adjacencies with all other routers. Non-DR routers (DROTHERs) form Full adjacencies only with the DR and BDR, remaining in 2-Way state with each other. This design reduces the number of adjacencies required on multi-access segments, improving scalability. On point-to-point links, neighbors always proceed to Full state since only two routers exist.
Monitoring OSPF neighbor states using the show ip ospf neighbor command is a critical troubleshooting activity. Neighbors stuck in states before Full, such as ExStart or Exchange, indicate problems with database synchronization, MTU mismatches, or access control lists blocking OSPF packets. Neighbors repeatedly transitioning between states suggest network instability, timer mismatches, or configuration inconsistencies. A stable Full state indicates healthy OSPF operation and proper adjacency formation.
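Useful verification commands for the adjacency states discussed above:

```
Router# show ip ospf neighbor
Router# show ip ospf neighbor detail
Router# show ip ospf interface brief
```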
Option A is incorrect because Init state indicates one-way communication where hellos have been received but bidirectional communication is not yet established. Option B is incorrect because 2-Way state indicates bidirectional communication but database exchange has not started. Option C is incorrect because ExStart state is where database exchange begins with master-slave negotiation, not completion.
Question 153:
What is the function of the switchport mode access command?
A) To configure a port as a trunk port
B) To configure a port as an access port for a single VLAN
C) To enable port security
D) To disable the port
Answer: B
Explanation:
The switchport mode access command configures a switch port as an access port, which is a port type dedicated to connecting end devices such as workstations, servers, printers, or IP phones that belong to a single VLAN. Access ports do not carry traffic for multiple VLANs and do not use VLAN tagging; instead, they provide untagged connectivity for devices that are typically unaware of VLAN concepts. This command is fundamental to basic switch configuration and is typically one of the first commands applied when configuring ports for end-user connectivity.
When a port is configured in access mode, several important behaviors are established. The port will not form trunk links with connected devices and will not respond to Dynamic Trunking Protocol (DTP) negotiations from other switches, preventing accidental trunk formation. All frames entering the port are assumed to belong to the configured access VLAN and are internally tagged with that VLAN ID as they traverse the switch fabric. Frames exiting the port have VLAN tags removed, presenting untagged Ethernet frames to the connected device. This ensures compatibility with standard network interface cards and operating systems that do not understand VLAN tagging.
Access ports are configured with a specific VLAN assignment using the switchport access vlan command following the switchport mode access command. For example, switchport access vlan 10 assigns the port to VLAN 10. If the VLAN does not exist, the switch typically creates it automatically. If no VLAN is explicitly specified, access ports default to VLAN 1. Additional configuration on access ports commonly includes port security to restrict which MAC addresses can connect, spanning tree PortFast to immediately transition to forwarding state, and storm control to prevent broadcast storms from connected devices.
Best practices for access port configuration include explicitly configuring ports as access ports rather than relying on default behavior, disabling DTP on access ports using switchport nonegotiate to prevent trunk negotiation attempts, and implementing port security to prevent unauthorized device connections. These practices improve security and prevent configuration errors that could compromise network segmentation.
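A representative access-port configuration following these practices might look like the sketch below; the interface, VLAN number, MAC limit, and storm-control threshold are example choices, not requirements:

```
interface GigabitEthernet0/5
 switchport mode access
 switchport access vlan 10
 switchport nonegotiate
 switchport port-security
 switchport port-security maximum 2
 spanning-tree portfast
 storm-control broadcast level 5.00
```

Explicitly setting both the mode and the VLAN removes any dependence on DTP negotiation or default VLAN behavior.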
Option A is incorrect because trunk ports are configured with switchport mode trunk, not access. Option C is incorrect because port security is configured separately using switchport port-security commands. Option D is incorrect because ports are disabled using the shutdown command, not switchport mode access.
Question 154:
Which routing protocol uses bandwidth and delay as default metrics?
A) RIP
B) OSPF
C) EIGRP
D) BGP
Answer: C
Explanation:
EIGRP (Enhanced Interior Gateway Routing Protocol) is the routing protocol that uses bandwidth and delay as its default metric components for path calculation. EIGRP’s composite metric formula is designed to provide more sophisticated and accurate path selection than simple hop count by considering multiple interface characteristics. Although EIGRP’s metric formula theoretically supports five components—bandwidth, delay, load, reliability, and MTU—only bandwidth and delay are used in the default configuration, with the K-values determining which components affect the metric calculation.
The EIGRP metric calculation uses the formula: Metric = [K1 × bandwidth + (K2 × bandwidth)/(256 – load) + K3 × delay] × [K5/(reliability + K4)]. In the default configuration, K1 and K3 are set to 1, while K2, K4, and K5 are set to 0; because K5 is 0, the final multiplier is skipped rather than evaluated as zero, effectively reducing the formula to: Metric = bandwidth + delay. More precisely, the calculated metric is 256 × (10^7/minimum bandwidth + cumulative delay/10), where bandwidth is in kilobits per second and delay is the cumulative delay in microseconds (dividing by 10 converts it to the tens-of-microseconds units EIGRP uses). This formula identifies paths with higher bandwidth and lower cumulative delay as preferred routes.
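As a worked example with hypothetical link values, consider a path whose slowest link is a 1544 kbps T1 and whose cumulative delay is 40,000 microseconds (two serial hops at 20,000 microseconds each):

```
bandwidth term = 10^7 / 1544         = 6476    (integer division)
delay term     = 40000 / 10          = 4000    (tens of microseconds)
metric         = 256 × (6476 + 4000) = 256 × 10476 = 2,681,856
```

The bandwidth term dominates when slow links are present, which is why low-bandwidth bottlenecks so strongly influence EIGRP path selection.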
Bandwidth in the EIGRP calculation represents the minimum bandwidth along the path from source to destination, creating a bottleneck-aware metric. A path is only as fast as its slowest link, so using minimum bandwidth ensures realistic path assessment. Delay represents the cumulative interface delay along the path, with each interface contributing its configured delay value. Delay typically correlates with link types—fiber links have lower delay than satellite links—providing differentiation beyond simple bandwidth considerations.
Network administrators can manipulate EIGRP path selection by adjusting interface bandwidth or delay values without changing the actual physical characteristics. This capability enables traffic engineering and policy-based routing, though changes should be documented carefully to avoid confusion. The K-values can also be modified to include load and reliability in metric calculations, though this is rarely done because dynamically changing metrics can cause routing instability.
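For instance, delay can be raised on an interface to make a path less preferred, and K-values can be changed (identically on every router in the autonomous system) under the EIGRP process; the interface and AS number below are hypothetical:

```
interface Serial0/1
 delay 2000                     ! configured in tens of microseconds
!
router eigrp 100
 metric weights 0 1 0 1 0 0    ! tos k1 k2 k3 k4 k5 (the defaults shown)
```

Adjusting delay is generally preferred over adjusting bandwidth for traffic engineering, because the bandwidth value is also used by QoS and other features.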
Option A is incorrect because RIP uses hop count as its only metric. Option B is incorrect because OSPF uses cost based primarily on bandwidth. Option D is incorrect because BGP uses path attributes like AS path length, not bandwidth and delay.
Question 155:
A network engineer is configuring OSPF authentication on a multi-access network segment. Which authentication method provides the strongest security for OSPF adjacencies?
A) Type 0 (null authentication)
B) Type 1 (clear text authentication)
C) Type 2 (MD5 authentication)
D) OSPFv3 IPsec authentication
Answer: D
Explanation:
OSPFv3 IPsec authentication provides the strongest security for OSPF adjacencies by leveraging IPsec’s Authentication Header (AH) or Encapsulating Security Payload (ESP) protocols. IPsec provides cryptographic authentication, encryption, and integrity checking that is significantly more robust than legacy authentication methods. OSPFv3 integrates directly with IPsec for authentication and optional encryption of routing protocol traffic. This approach uses modern cryptographic algorithms and key management, providing protection against replay attacks, man-in-the-middle attacks, and packet tampering. IPsec authentication supports both authentication headers and encrypted payloads, offering flexibility between authentication-only and authentication-plus-encryption modes.
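On Cisco IOS, OSPFv3 IPsec authentication is typically enabled per interface roughly as follows; the process ID, SPI, and key are placeholders, and the same SPI and key must be configured on both neighbors:

```
interface GigabitEthernet0/0
 ipv6 ospf 1 area 0
 ipv6 ospf authentication ipsec spi 500 sha1 <40-hex-digit-key>
```

The SPI value ties the OSPFv3 packets to a manually keyed IPsec security association, so a mismatch on either side prevents the adjacency from forming.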
Option A is incorrect because Type 0 null authentication provides no security whatsoever, allowing any router to form OSPF adjacencies without authentication. This configuration leaves networks vulnerable to malicious routing updates, route injection attacks, and unauthorized network topology modifications. Null authentication is only appropriate for isolated lab environments, never production networks requiring any level of security. Without authentication, attackers can easily disrupt routing by injecting false routing information.
Option B is incorrect because Type 1 clear text authentication transmits passwords unencrypted across the network, making them easily interceptable through packet capture. While clear text authentication prevents accidental misconfigurations, it provides no protection against determined attackers with network access. Anyone capturing OSPF packets can read passwords directly and use them to inject malicious routes. Clear text authentication is considered deprecated and should not be used in security-conscious environments.
Option C is incorrect because while Type 2 MD5 authentication provides better security than clear text by using cryptographic hashing, it’s vulnerable to replay attacks and uses outdated MD5 algorithm with known weaknesses. MD5 authentication doesn’t provide encryption of routing updates, only authentication that packets originated from authorized routers. Modern security standards recommend moving away from MD5 due to collision vulnerabilities. IPsec authentication provides superior security with stronger algorithms and anti-replay protection.
Question 156:
An enterprise network is experiencing intermittent connectivity issues with wireless clients. Analysis shows that clients are associated with access points but cannot obtain IP addresses. Which troubleshooting step should be performed first?
A) Replace all wireless access points
B) Verify DHCP server availability and scope configuration
C) Increase wireless transmit power
D) Change all wireless channels
Answer: B
Explanation:
Verifying DHCP server availability and scope configuration should be the first troubleshooting step when clients successfully associate with access points but cannot obtain IP addresses. This symptom indicates wireless connectivity is functioning at Layer 2, but IP address assignment is failing at Layer 3. Checking DHCP server status, scope exhaustion, DHCP relay agent configuration, and network connectivity to DHCP servers identifies common issues preventing IP address assignment. DHCP problems frequently cause these symptoms, and verification takes minimal time before considering more disruptive changes.
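Typical first checks on Cisco infrastructure might include the commands below; the pool output fields, VLAN interface, and server address are illustrative:

```
show ip dhcp pool               ! scope size, leased and excluded address counts
show ip dhcp binding            ! currently active leases
show ip dhcp server statistics  ! discovers/offers/requests/acks counters
!
! On the client VLAN's Layer 3 interface, confirm the relay points at the server:
interface Vlan10
 ip helper-address 10.1.1.50
```

A missing or wrong ip helper-address on the clients' gateway interface is one of the most common causes of the "associated but no IP address" symptom.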
Option A is incorrect because replacing all wireless access points is an extreme measure that is unnecessarily expensive and time-consuming when the issue is likely not hardware-related. Since clients successfully associate with APs, the wireless hardware and RF connectivity are functioning. The symptom pattern (association succeeds, IP assignment fails) points to Layer 3 or higher issues, not AP hardware failures. Wholesale hardware replacement should only be considered after exhausting all other troubleshooting steps and identifying actual hardware problems.
Option C is incorrect because increasing wireless transmit power doesn’t address IP address assignment failures. Clients are already successfully associating with access points, indicating adequate signal strength for Layer 2 connectivity. Transmit power adjustments affect RF coverage and signal quality but have no impact on DHCP operations. This action would waste time without addressing the root cause and might create co-channel interference problems degrading overall wireless performance.
Option D is incorrect because changing all wireless channels doesn’t resolve DHCP-related IP address assignment failures. Wireless channel selection affects RF interference and throughput but doesn’t impact DHCP server communication once clients have associated. Since association succeeds, channel quality is adequate for connectivity. Changing channels would disrupt connected clients unnecessarily without addressing the actual problem of failed IP address assignment.
Question 157:
A network administrator needs to implement QoS to prioritize voice traffic over a WAN link with limited bandwidth. Which QoS mechanism should be configured to provide strict priority for voice packets?
A) Weighted Fair Queuing (WFQ)
B) Low Latency Queuing (LLQ) with priority queue
C) First-In-First-Out (FIFO) queuing
D) Round-robin scheduling
Answer: B
Explanation:
Low Latency Queuing (LLQ) with priority queue provides strict priority for voice packets, ensuring minimal delay and jitter critical for voice quality. LLQ extends Class-Based Weighted Fair Queuing (CBWFQ) by adding a strict priority queue that services packets before any other queue. Voice traffic placed in the priority queue receives immediate forwarding with guaranteed bandwidth allocation up to the configured priority limit. This prevents voice packets from queuing behind data traffic, maintaining the low delay requirements (typically under 150ms end-to-end) necessary for acceptable voice quality. LLQ also includes an implicit policer on the priority queue that prevents it from starving the other queues during congestion.
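A minimal LLQ policy might resemble the following MQC configuration; the class and policy names, the DSCP EF match, and the 256 kbps priority allocation are example values:

```
class-map match-all VOICE
 match dscp ef
!
policy-map WAN-EDGE
 class VOICE
  priority 256        ! strict priority, policed to 256 kbps during congestion
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-EDGE
```

The priority bandwidth should be sized for the expected number of concurrent calls, since traffic exceeding it is dropped during congestion rather than queued.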
Option A is incorrect because Weighted Fair Queuing (WFQ) doesn’t provide strict priority for voice traffic. WFQ allocates bandwidth proportionally based on traffic weights but doesn’t guarantee immediate forwarding. Voice packets might still experience queuing delay behind other flows during congestion. While WFQ provides better fairness than FIFO, it cannot meet the stringent delay requirements of voice traffic requiring priority treatment. WFQ suits general data traffic but lacks the priority scheduling voice applications demand.
Option C is incorrect because First-In-First-Out (FIFO) queuing provides no differentiated treatment, processing all packets in arrival order regardless of traffic type. FIFO offers no prioritization, allowing voice packets to queue behind large data packets causing unacceptable delay and jitter. During congestion, FIFO equally delays all traffic without considering application requirements. Voice traffic requires priority scheduling that FIFO fundamentally cannot provide, making it completely unsuitable for QoS implementations requiring voice prioritization.
Option D is incorrect because round-robin scheduling distributes bandwidth equally across queues in cyclical fashion without providing priority treatment. Round-robin ensures fairness but doesn’t give voice traffic the immediate forwarding it requires. Voice packets would wait their turn in the round-robin cycle, experiencing variable delay depending on other queue depths. This scheduling method cannot meet voice traffic’s strict delay and jitter requirements, making it inappropriate for prioritizing time-sensitive applications.
Question 158:
A company is implementing SD-WAN to connect multiple branch offices. Which SD-WAN capability provides the most significant advantage over traditional WAN routing?
A) Support for only MPLS circuits
B) Application-aware routing with dynamic path selection
C) Static routing configurations only
D) Single transport dependency
Answer: B
Explanation:
Application-aware routing with dynamic path selection represents SD-WAN’s most significant advantage over traditional WAN routing by intelligently steering traffic across multiple transport paths based on application requirements and real-time path performance. SD-WAN solutions continuously monitor path characteristics including latency, jitter, packet loss, and availability across all available transports (MPLS, Internet, LTE, etc.). Based on application policies and current path conditions, SD-WAN dynamically selects optimal paths for each application flow, ensuring business-critical applications receive appropriate performance while efficiently utilizing all available bandwidth. This intelligent traffic steering improves application performance, enables transport cost optimization, and provides automatic failover without manual intervention.
Option A is incorrect because SD-WAN’s value proposition specifically includes supporting multiple transport types simultaneously, not limiting to only MPLS circuits. SD-WAN enables organizations to leverage cost-effective Internet broadband alongside or instead of expensive MPLS circuits. Supporting diverse transports (Internet, MPLS, LTE, satellite) provides flexibility, redundancy, and cost optimization that single-transport approaches cannot deliver. Restricting to MPLS only negates SD-WAN’s core benefits of transport independence and hybrid WAN architectures.
Option C is incorrect because static routing configurations represent traditional WAN approaches that SD-WAN specifically improves upon. SD-WAN’s intelligence comes from dynamic routing decisions based on real-time conditions, not static configurations requiring manual changes. Static routing cannot adapt to changing network conditions, transport failures, or varying application requirements. SD-WAN’s dynamic path selection automatically responds to network changes, providing resilience and optimization impossible with static configurations.
Option D is incorrect because single transport dependency is a limitation of traditional WANs that SD-WAN eliminates, not an advantage. SD-WAN’s architecture specifically enables multiple simultaneous transports for redundancy, load balancing, and cost optimization. Single transport creates single points of failure and limits bandwidth options. SD-WAN’s multi-transport capability provides the resilience and flexibility that single transport approaches lack, representing fundamental SD-WAN value rather than a limitation.
Question 159:
A network engineer is troubleshooting EIGRP adjacency issues between two routers. The routers are directly connected but the adjacency is not forming. Which configuration mismatch would prevent EIGRP neighbor relationships from establishing?
A) Different router IDs
B) Mismatched K-values
C) Different administrative distances
D) Mismatched router priority values
Answer: B
Explanation:
Mismatched K-values prevent EIGRP neighbor relationships from establishing because K-values define the metric calculation components and must match for routers to become neighbors. EIGRP uses K-values (K1 through K5) to weight different path characteristics (bandwidth, load, delay, reliability, MTU) in metric calculations. Routers with different K-values cannot form adjacencies because they would calculate metrics differently, potentially causing routing inconsistencies and loops. EIGRP explicitly checks K-value consistency during neighbor discovery, refusing adjacencies with mismatched values to ensure all routers in an autonomous system use identical metric calculations.
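A mismatch like the following would block the adjacency, and IOS logs the rejection explicitly; the AS number, neighbor address, and interface shown are hypothetical:

```
router eigrp 100
 metric weights 0 1 1 1 0 0   ! K2=1 here while the neighbor keeps the default 0 1 0 1 0 0
!
! The neighbor reports something similar to:
! %DUAL-5-NBRCHANGE: EIGRP-IPv4 100: Neighbor 10.0.0.1 (GigabitEthernet0/0) is down: K-value mismatch
```

Checking the K-values line in show ip protocols on both routers is a quick way to confirm or rule out this cause.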
Option A is incorrect because different router IDs do not prevent EIGRP adjacency formation. Unlike OSPF where router ID conflicts can cause issues, EIGRP router IDs serve primarily for tie-breaking in route selection and loop prevention but don’t affect adjacency establishment. EIGRP routers with different router IDs successfully form adjacencies without problems. Router ID uniqueness is recommended for administrative clarity but isn’t enforced as an adjacency requirement.
Option C is incorrect because administrative distance is a local router configuration that affects route preference in the routing table but doesn’t impact EIGRP neighbor discovery or adjacency formation. Different routers can use different administrative distances for EIGRP without preventing adjacencies. Administrative distance influences which routes enter the routing table when multiple routing protocols advertise the same destination but doesn’t participate in EIGRP’s neighbor establishment process.
Option D is incorrect because EIGRP doesn’t use priority values in neighbor relationships or adjacency formation. Priority is an OSPF concept for designated router election on multi-access networks. EIGRP uses a different neighbor discovery mechanism without priority-based elections. All EIGRP routers on a segment form adjacencies with all other EIGRP neighbors without priority considerations, making priority mismatch irrelevant to EIGRP adjacency issues.
Question 160:
An enterprise network requires implementing redundancy for Internet connectivity with automatic failover. Which technology provides the fastest convergence time for automatic failover between primary and backup Internet connections?
A) Static routes with manual failover
B) IP SLA tracking with reliable static routes
C) Policy-based routing without monitoring
D) BGP with long AS-path prepending
Answer: B
Explanation:
IP SLA tracking with reliable static routes provides the fastest convergence time for automatic failover between Internet connections by continuously monitoring reachability and link quality, triggering immediate route changes when thresholds are violated. IP SLA probes actively test connectivity through the primary path, typically using ICMP echo, HTTP GET, or other protocols to verify end-to-end reachability beyond simple interface status. When IP SLA detects failure based on configurable thresholds (timeout, packet loss, latency), tracked static routes instantly become invalid, causing immediate traffic shift to backup paths. This approach typically achieves sub-second failover compared to minutes for routing protocol convergence.
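A common sketch of this design ties an ICMP probe to a tracked default route; the addresses, probe frequency, and the backup administrative distance of 250 below are example choices:

```
ip sla 1
 icmp-echo 203.0.113.1 source-interface GigabitEthernet0/0
 frequency 5
ip sla schedule 1 life forever start-time now
!
track 1 ip sla 1 reachability
!
ip route 0.0.0.0 0.0.0.0 203.0.113.1 track 1    ! primary, withdrawn when the probe fails
ip route 0.0.0.0 0.0.0.0 198.51.100.1 250       ! floating static backup
```

When track 1 goes down, the primary route is removed and the floating static route with the higher administrative distance takes over automatically; probing an address beyond the provider edge is preferable to probing the next hop, since the local interface can stay up while the path beyond it is broken.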
Option A is incorrect because static routes with manual failover require human intervention to detect failures and reconfigure routing, resulting in extended downtime measured in minutes or hours. Manual processes depend on monitoring systems alerting administrators, administrators diagnosing problems, and manual configuration changes taking effect. This approach provides no automatic failover capability, leaving networks vulnerable during the detection and response period. Manual failover is completely unsuitable for production environments requiring high availability.
Option C is incorrect because policy-based routing without monitoring cannot detect path failures and automatically failover. PBR routes traffic based on configured policies but lacks mechanisms to verify path viability or trigger automatic route changes when paths fail. Traffic continues being directed to failed paths until manual intervention occurs. Without active monitoring like IP SLA, PBR cannot provide automatic failover functionality regardless of how policies are configured.
Option D is incorrect because BGP with AS-path prepending is a traffic engineering technique for influencing inbound routing decisions, not an automatic failover mechanism for outbound connectivity. AS-path prepending makes paths appear less desirable to remote peers but doesn’t detect failures or trigger local failover. BGP convergence typically takes tens of seconds to minutes depending on timer settings, significantly slower than IP SLA-based failover. Additionally, prepending affects inbound traffic from remote ASes, not local outbound path selection for Internet connectivity failover.