Question 121:
Which routing protocol uses the DUAL algorithm to calculate the best path to a destination?
A) OSPF
B) EIGRP
C) RIP
D) BGP
Answer: B
Explanation:
EIGRP (Enhanced Interior Gateway Routing Protocol) is the routing protocol that utilizes the DUAL (Diffusing Update Algorithm) to calculate and maintain the best path to destination networks. DUAL is a sophisticated algorithm developed by Cisco that ensures loop-free routing operations and provides rapid convergence when network topology changes occur. This algorithm is one of the key features that distinguishes EIGRP from other routing protocols and contributes to its efficiency in enterprise networks.
The DUAL algorithm works by maintaining a topology table that contains all learned routes from neighboring routers, not just the best routes. For each destination network, DUAL calculates a feasible distance, which is the best metric to reach that destination, and identifies a successor route, which is the primary path currently used for forwarding traffic. Additionally, DUAL can identify feasible successors, which are backup routes that meet specific criteria and can be immediately used if the primary route fails without requiring a full route recalculation.
When a route fails, DUAL first checks if a feasible successor exists in the topology table. If one is available, EIGRP can switch to this backup route instantly, achieving convergence in milliseconds. If no feasible successor exists, DUAL initiates a query process to neighboring routers to find an alternative path. This query process is controlled and loop-free due to the mathematical properties of the DUAL algorithm.
Option A is incorrect because OSPF uses the Dijkstra Shortest Path First algorithm, not DUAL. Option C is incorrect because RIP uses the Bellman-Ford algorithm for route calculation. Option D is incorrect because BGP uses the Best Path Selection algorithm based on various attributes like AS path, local preference, and MED rather than DUAL.
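As a minimal illustration (the AS number 100 and network statement below are assumptions, not from the question), DUAL runs automatically as soon as EIGRP is enabled; no separate configuration of the algorithm is needed:

```
router eigrp 100
 network 10.0.0.0 0.255.255.255
 no auto-summary
! DUAL runs automatically; its computed successors and feasible
! successors can be inspected with: show ip eigrp topology
```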
Question 122:
What is the default administrative distance of OSPF?
A) 90
B) 100
C) 110
D) 120
Answer: C
Explanation:
The default administrative distance of OSPF (Open Shortest Path First) is 110. Administrative distance is a numerical value assigned to routing protocols that indicates the trustworthiness or reliability of the routing information source. When a router learns about the same destination network from multiple routing protocols, it uses the administrative distance to determine which route to install in the routing table. The route with the lowest administrative distance is considered the most trustworthy and is preferred for installation in the routing table.
OSPF’s administrative distance of 110 places it in the middle range of routing protocol preferences. This value reflects OSPF’s status as a reliable and widely-deployed link-state routing protocol suitable for enterprise networks. The administrative distance can be manually adjusted by network administrators if specific routing policy requirements necessitate changing the default preference order, though this is relatively uncommon in most network designs.
Understanding administrative distance is crucial for network engineers working with multi-protocol routing environments. For comparison, directly connected routes have an administrative distance of 0, static routes have 1, EIGRP has 90, and RIP has 120. This hierarchy means that if a router learns about the same destination from both EIGRP and OSPF, it will prefer the EIGRP route because 90 is lower than 110.
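As a sketch of where these values are tuned (process and AS numbers below are illustrative assumptions), the distance command under each routing process adjusts the defaults discussed above:

```
router ospf 1
 distance 110           ! the default; shown only to illustrate where AD is set
!
router eigrp 100
 distance eigrp 90 170  ! defaults for internal / external EIGRP routes
```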
Option A is incorrect because 90 is the administrative distance for EIGRP internal routes, not OSPF. Option B is incorrect because 100 is the administrative distance for IGRP, an older Cisco routing protocol. Option D is incorrect because 120 is the administrative distance for RIP, not OSPF. The correct answer for OSPF is 110.
Question 123:
Which OSPF network type requires manual neighbor configuration?
A) Broadcast
B) Point-to-point
C) Non-broadcast
D) Point-to-multipoint
Answer: C
Explanation:
The OSPF non-broadcast network type requires manual neighbor configuration because it operates on networks that do not support broadcast or multicast capabilities for automatic neighbor discovery. This network type is typically used on Frame Relay, ATM, and other non-broadcast multi-access (NBMA) networks where the underlying media does not inherently support sending packets to multiple destinations simultaneously. Without broadcast capability, OSPF cannot automatically discover neighboring routers using multicast hello packets, necessitating manual configuration of neighbor relationships.
In non-broadcast networks, the network administrator must explicitly configure each neighbor’s IP address on the OSPF router using the neighbor command. This configuration tells the OSPF process exactly which IP addresses to contact for forming adjacencies. The non-broadcast network type also requires the election of a designated router (DR) and backup designated router (BDR) to minimize the number of adjacencies and optimize routing updates, similar to broadcast networks. Only the DR and BDR form full adjacencies with all other routers, while non-DR routers only form adjacencies with the DR and BDR.
The manual configuration requirement adds administrative overhead but provides precise control over OSPF neighbor relationships in complex NBMA topologies. Network engineers must ensure that all necessary neighbor statements are configured and that the network topology supports full connectivity between routers that need to exchange routing information.
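A minimal sketch of this configuration on a Frame Relay hub (addresses and interface names are illustrative assumptions): the interface is set to the non-broadcast network type, and each spoke is listed explicitly with the neighbor command under the OSPF process:

```
interface Serial0/0
 ip address 10.1.1.1 255.255.255.0
 encapsulation frame-relay
 ip ospf network non-broadcast
!
router ospf 1
 network 10.1.1.0 0.0.0.255 area 0
 neighbor 10.1.1.2
 neighbor 10.1.1.3
```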
Option A is incorrect because broadcast networks like Ethernet automatically discover neighbors through multicast hello packets. Option B is incorrect because point-to-point networks also automatically discover neighbors without manual configuration. Option D is incorrect because point-to-multipoint networks support automatic neighbor discovery through modified hello packet behavior designed for NBMA environments.
Question 124:
What command is used to verify EIGRP neighbor relationships on a Cisco router?
A) show ip eigrp interfaces
B) show ip eigrp neighbors
C) show ip eigrp topology
D) show ip route eigrp
Answer: B
Explanation:
The command show ip eigrp neighbors is used to verify and display EIGRP neighbor relationships on Cisco routers. This command provides critical information about the current state of EIGRP adjacencies, which are essential for proper routing protocol operation. The output includes details such as neighbor IP addresses, the local interface through which the neighbor is reachable, hold time remaining before the neighbor is declared down, uptime indicating how long the adjacency has been established, smooth round-trip time (SRTT), retransmission timeout (RTO), queue counts, and the sequence number of the last update received.
Verifying EIGRP neighbor relationships is a fundamental troubleshooting step when diagnosing routing issues. If expected neighbors do not appear in the output, it indicates problems such as misconfigured authentication, mismatched autonomous system numbers, interface configuration issues, or network connectivity problems preventing hello packets from being exchanged. The hold time value is particularly important as it shows how long the router will wait before declaring a neighbor unreachable if no hello packets are received.
The command is straightforward to use and provides immediate visibility into the health of EIGRP adjacencies. Network administrators regularly use this command during initial configuration verification, ongoing monitoring, and troubleshooting activities. Understanding the output helps quickly identify whether routing protocol problems stem from neighbor relationship issues or other factors.
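An illustrative excerpt of the output (the address, timers, and AS number below are invented for the example; exact column layout varies slightly by IOS release):

```
Router# show ip eigrp neighbors
EIGRP-IPv4 Neighbors for AS(100)
H   Address        Interface    Hold  Uptime    SRTT   RTO   Q    Seq
                                (sec)           (ms)         Cnt  Num
0   10.1.1.2       Gi0/0          13  01:24:51    12   200   0    45
```

A hold time steadily counting down and then resetting with each hello, together with a Q Cnt of 0, indicates a healthy adjacency.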
Option A is incorrect because show ip eigrp interfaces displays information about interfaces running EIGRP but does not show neighbor relationships. Option C is incorrect because show ip eigrp topology displays the EIGRP topology table containing all learned routes, not specifically neighbor information. Option D is incorrect because show ip route eigrp displays only the EIGRP routes installed in the routing table, not neighbor relationship details.
Question 125:
In OSPF, what is the function of the designated router (DR)?
A) To encrypt routing updates
B) To reduce the number of adjacencies on multi-access networks
C) To provide backup routing in case of link failure
D) To perform route summarization
Answer: B
Explanation:
The designated router (DR) in OSPF serves the critical function of reducing the number of adjacencies required on multi-access networks such as Ethernet segments. Without a DR, every router on a multi-access network would need to form full adjacencies with every other router, resulting in a large number of adjacencies that scales as n(n-1)/2, where n is the number of routers. This would create excessive overhead for link-state advertisement flooding and database synchronization. The DR mechanism solves this scalability problem by centralizing routing update distribution.
On multi-access networks, OSPF routers elect one DR and one backup designated router (BDR) through an election process based on router priority and router ID. All other routers on the segment, called DROTHERs, form adjacencies only with the DR and BDR rather than with each other. When a router needs to send a routing update, it multicasts the update to the DR and BDR using the multicast address 224.0.0.6. The DR then redistributes this information to all other routers on the segment using multicast address 224.0.0.5, ensuring everyone receives the update while minimizing redundant transmissions.
The BDR monitors the DR and maintains the same adjacencies, ready to take over immediately if the DR fails. This design significantly reduces the number of adjacencies from potentially dozens or hundreds down to just two per router (one with DR and one with BDR), greatly improving efficiency and reducing protocol overhead on the network segment.
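The election outcome can be influenced per interface with the ip ospf priority command (interface names below are illustrative assumptions); higher priority is preferred, and priority 0 removes a router from the election entirely:

```
interface GigabitEthernet0/0
 ip ospf priority 255   ! highest priority: favored in the DR election
!
interface GigabitEthernet0/1
 ip ospf priority 0     ! this router will never become DR or BDR here
```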
Option A is incorrect because the DR does not encrypt routing updates; OSPF authentication handles security. Option C is incorrect because backup routing is provided by the OSPF algorithm itself through alternate paths, not the DR function. Option D is incorrect because route summarization is configured at area border routers, not performed by the DR.
Question 126:
Which VLAN range is considered the extended VLAN range on Cisco switches?
A) 1-1005
B) 1006-4094
C) 1-4094
D) 2-1001
Answer: B
Explanation:
The extended VLAN range on Cisco switches encompasses VLAN IDs from 1006 to 4094. This range was introduced to provide additional VLAN capacity beyond the standard VLAN range for large enterprise networks and service provider environments that require more than the original 1005 VLANs available in the standard range. The extended range allows network administrators to scale their VLAN deployments to support significantly larger numbers of broadcast domains and network segments within their infrastructure.
There are important differences between standard and extended range VLANs that network engineers must understand. Extended range VLANs require the switch to be in VTP transparent mode or VTP version 3, as earlier VTP versions do not propagate extended VLAN information. Additionally, extended range VLANs are not stored in the vlan.dat file like standard VLANs; instead, they are saved in the running configuration and must be saved to the startup configuration using the copy running-config startup-config command to persist across reboots.
The extended VLAN range is particularly useful in data center environments, large campus networks, and service provider networks where extensive network segmentation is required. Modern network designs often utilize extended VLANs for various purposes including customer isolation in multi-tenant environments, service segmentation, and supporting large numbers of network applications or departments within an organization.
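A minimal sketch of creating an extended-range VLAN (the VLAN ID and name are illustrative assumptions), noting the VTP transparent-mode prerequisite and the need to save the configuration described above:

```
vtp mode transparent
!
vlan 2500
 name TENANT-A
! extended VLANs are kept in the running configuration, so persist them:
! copy running-config startup-config
```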
Option A is incorrect because 1-1005 represents the standard VLAN range, not the extended range. Option C is incorrect because 1-4094 includes both standard and extended ranges combined. Option D is incorrect because 2-1001 represents a subset of the standard range, excluding VLAN 1 and some higher standard VLANs. The correct extended VLAN range is specifically 1006-4094.
Question 127:
What is the purpose of the STP PortFast feature?
A) To increase bandwidth on trunk ports
B) To immediately transition access ports to forwarding state
C) To prevent routing loops
D) To enable port security
Answer: B
Explanation:
The STP PortFast feature is designed to immediately transition access ports to the forwarding state, bypassing the normal Spanning Tree Protocol listening and learning states that typically take 30 seconds combined (15 seconds each). This feature significantly improves the user experience on ports connected to end devices such as workstations, servers, printers, and IP phones by allowing network connectivity to be established almost instantly when the device is powered on or connected, rather than waiting through the standard STP convergence delay.
Without PortFast, when an end device connects to a switch port, STP requires the port to progress through the blocking, listening, and learning states before reaching the forwarding state. During these initial states, no user data can pass through the port, resulting in noticeable delays for tasks like obtaining DHCP addresses, accessing network resources, or completing system boot processes that require network connectivity. PortFast eliminates this delay for ports where loops cannot occur because only a single end device is connected.
PortFast should only be configured on access ports connected to end devices, never on ports connected to other switches or bridges, as enabling PortFast on interconnected switch ports could create temporary bridging loops before STP detects the topology change. As a safety mechanism, a PortFast-enabled port that receives a BPDU loses its PortFast status and reverts to normal STP behavior, since a BPDU indicates that another switch has been connected. For stronger protection, BPDU Guard is a separate feature commonly configured alongside PortFast; it places a port into the error-disabled state if a BPDU is received.
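A minimal sketch of the recommended pairing on an access port (interface name is an illustrative assumption):

```
interface GigabitEthernet0/1
 switchport mode access
 spanning-tree portfast
 spanning-tree bpduguard enable   ! err-disable the port if a BPDU arrives
```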
Option A is incorrect because PortFast does not affect bandwidth or trunk port operation. Option C is incorrect because preventing routing loops is handled by routing protocols, while STP prevents bridging loops. Option D is incorrect because port security is a separate feature that restricts MAC addresses allowed on a port.
Question 128:
Which protocol is used by HSRP for router redundancy?
A) TCP
B) UDP
C) ICMP
D) IP protocol 112
Answer: B
Explanation:
HSRP (Hot Standby Router Protocol) uses UDP (User Datagram Protocol) for communication between routers participating in an HSRP group. Specifically, HSRP sends hello messages using UDP port 1985 to coordinate redundancy operations and maintain the active and standby router roles within the group. The use of UDP is appropriate for HSRP because the protocol requires fast, periodic updates rather than the reliability overhead associated with TCP connections. If an HSRP hello message is lost, the next hello will arrive shortly, making UDP’s connectionless nature more suitable than TCP’s connection-oriented approach.
HSRP hello messages are exchanged between routers at regular intervals, typically every 3 seconds by default, to inform other group members of their operational status and priority. These messages contain information including HSRP version, operation code, state of the router, priority value, authentication data, and the virtual IP address being protected. The active router sends hellos to announce its continued operation, while standby and other routers listen to these hellos to monitor the active router’s health.
When the standby router stops receiving hello messages from the active router for a specified hold time (default 10 seconds), it assumes the active router has failed and transitions to become the new active router, taking over the virtual IP and MAC addresses. This transition happens seamlessly from the perspective of end users because their default gateway configuration points to the virtual IP address, which remains constant regardless of which physical router is actively forwarding traffic.
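A minimal HSRP sketch for one router in the group (addresses, group number, and priority are illustrative assumptions); the virtual IP configured here is what end hosts use as their default gateway:

```
interface GigabitEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 standby 1 ip 10.1.1.1        ! virtual IP shared by the HSRP group
 standby 1 priority 110       ! higher priority wins the active role
 standby 1 preempt            ! reclaim the active role after recovery
```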
Option A is incorrect because HSRP does not use TCP; it requires the lightweight, connectionless nature of UDP. Option C is incorrect because ICMP is used for network diagnostic functions, not HSRP communication. Option D is incorrect because IP protocol 112 is used by VRRP, a different redundancy protocol, not HSRP.
Question 129:
What is the maximum number of paths EIGRP can load balance across by default?
A) 2
B) 4
C) 6
D) 16
Answer: B
Explanation:
EIGRP can load balance across a maximum of 4 equal-cost paths by default. This default setting allows EIGRP to utilize multiple paths to the same destination network when those paths have identical metrics, distributing traffic across the available links to improve bandwidth utilization and provide redundancy. Load balancing is one of EIGRP’s key features that enhances network performance and resilience by preventing a single path from becoming a bottleneck while other viable paths remain underutilized.
The load balancing behavior in EIGRP works by installing multiple equal-cost routes for the same destination into the routing table. When forwarding packets, the router distributes traffic across these multiple paths using either per-packet or per-destination load balancing, depending on the switching method configured. Per-destination load balancing, which is more common in modern networks, sends all packets for a specific destination host across the same path, maintaining packet ordering for individual flows while still distributing different destination traffic across multiple paths.
Network administrators can modify the maximum number of paths for load balancing using the maximum-paths command under the EIGRP routing process configuration. The value can be adjusted from 1 (effectively disabling load balancing) up to 32 paths in recent IOS versions, though supporting more than a few paths simultaneously is uncommon in most network designs. Additionally, EIGRP supports unequal-cost load balancing through the variance command, which allows load balancing across paths with different metrics within a specified ratio.
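A sketch of both adjustments under the EIGRP process (AS number and values are illustrative assumptions):

```
router eigrp 100
 maximum-paths 6   ! raise equal-cost paths from the default of 4
 variance 2        ! also permit paths with metrics up to 2x the best
```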
Option A is incorrect because 2 is not the default maximum; EIGRP defaults to 4 paths. Option C is incorrect because 6 is not a standard default value for any major routing protocol. Option D is incorrect because 16 represents the maximum possible value in some older IOS versions, not the default value. The correct default is 4 equal-cost paths.
Question 130:
Which command configures an interface as a DHCP client on a Cisco router?
A) ip address dhcp
B) ip dhcp client
C) dhcp enable
D) ip address dynamic
Answer: A
Explanation:
The command ip address dhcp configures a Cisco router interface to obtain its IP address dynamically from a DHCP server, functioning as a DHCP client. This configuration is particularly useful in scenarios where routers connect to service provider networks, cable modem connections, or any environment where dynamic addressing is preferred or required. When this command is applied to an interface, the router generates DHCP discover messages to locate available DHCP servers and negotiates to receive an IP address, subnet mask, default gateway, and other network configuration parameters.
The DHCP client process on a Cisco router follows the standard DHCP four-way handshake: discover, offer, request, and acknowledge. After receiving an IP address, the router periodically attempts to renew the lease before it expires to maintain continuous network connectivity. The ip address dhcp command itself appears in the running configuration, but the leased address does not; the assignment is temporary and may change if the router reboots and receives a different address from the DHCP server.
This functionality is commonly used in small office/home office environments where Cisco routers connect to ISP networks via cable or DSL modems that provide dynamic addressing. It is also useful in lab environments for rapid configuration and in scenarios where IP address management is centralized through DHCP infrastructure. Network administrators can verify the DHCP-assigned address using show ip interface brief or show dhcp lease commands to view lease information including the assigned address, lease duration, and DHCP server address.
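A minimal sketch of the configuration and the verification commands mentioned above (the interface name is an illustrative assumption):

```
interface GigabitEthernet0/0
 ip address dhcp
 no shutdown
! verify the leased address with:
! show ip interface brief
! show dhcp lease
```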
Option B is incorrect because ip dhcp client is not a valid Cisco IOS command for configuring DHCP client functionality. Option C is incorrect because dhcp enable is not the correct syntax; the proper command explicitly references IP addressing. Option D is incorrect because ip address dynamic is not valid Cisco IOS syntax for DHCP client configuration.
Question 131:
What is the default hello interval for OSPF on broadcast networks?
A) 5 seconds
B) 10 seconds
C) 30 seconds
D) 40 seconds
Answer: B
Explanation:
The default hello interval for OSPF on broadcast networks is 10 seconds. The hello interval determines how frequently OSPF routers send hello packets to discover and maintain neighbor relationships on a network segment. These periodic hello packets serve multiple critical functions including neighbor discovery, keepalive mechanism to verify that neighbors remain reachable, and communication of important OSPF parameters that must match between neighbors for successful adjacency formation.
OSPF hello packets contain essential information such as router ID, area ID, authentication data, designated router and backup designated router information, router priority, and a list of neighbors the router has already discovered. The hello interval is transmitted within the hello packet, and neighboring routers must be configured with matching hello intervals to form adjacencies. If hello intervals do not match between routers, they will not establish neighbor relationships, preventing routing information exchange.
The dead interval, which is four times the hello interval by default (40 seconds on broadcast networks), specifies how long a router waits without receiving hello packets from a neighbor before declaring that neighbor down and removing it from the neighbor table. This relationship between hello and dead intervals provides a balance between rapid failure detection and stability, preventing unnecessary neighbor flapping due to temporary network congestion or brief connectivity issues.
Different network types have different default hello intervals. While broadcast and point-to-point networks use 10-second hello intervals, non-broadcast multi-access (NBMA) networks use 30-second intervals due to their typically higher latency characteristics. Network administrators can manually adjust hello and dead intervals to tune OSPF convergence behavior for specific network requirements, though changes must be consistent across all routers on the segment.
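As a sketch of where these timers are tuned (interface name is an illustrative assumption; the values shown are simply the broadcast-network defaults made explicit), both sides of the segment must agree:

```
interface GigabitEthernet0/0
 ip ospf hello-interval 10   ! default on broadcast networks
 ip ospf dead-interval 40    ! 4 x hello by default
```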
Option A is incorrect because 5 seconds is not an OSPF default, though it is used by EIGRP. Option C is incorrect because 30 seconds is the default hello interval for NBMA networks, not broadcast networks. Option D is incorrect because 40 seconds is the default dead interval, not hello interval.
Question 132:
Which EtherChannel protocol is Cisco proprietary?
A) LACP
B) PAgP
C) UDLD
D) VTP
Answer: B
Explanation:
PAgP (Port Aggregation Protocol) is the Cisco proprietary protocol used for automatically negotiating and forming EtherChannel link aggregation bundles between Cisco switches. PAgP provides dynamic configuration capabilities that allow switches to automatically detect compatible ports and create EtherChannel groups without requiring complete manual configuration on both ends of the link. This automation reduces configuration errors and simplifies the deployment of link aggregation in Cisco-only network environments.
PAgP operates by exchanging packets between switches to negotiate EtherChannel formation. The protocol supports desirable mode, where the port actively attempts to form an EtherChannel by sending PAgP packets and responding to received packets, and auto mode, where the port passively waits for PAgP packets from the other side before forming an EtherChannel. (The on mode, by contrast, forces static EtherChannel formation without any negotiation and does not use PAgP at all.) For successful EtherChannel formation using PAgP, at least one side must be configured in desirable mode to initiate the negotiation.
PAgP verifies that ports have compatible configurations before bundling them into an EtherChannel. The protocol checks parameters such as port speed, duplex settings, VLAN configurations, and spanning tree settings. If incompatible configurations are detected, PAgP prevents EtherChannel formation, avoiding potential network issues. While PAgP provides robust functionality in Cisco environments, its proprietary nature means it cannot be used when connecting Cisco switches to equipment from other vendors, necessitating the use of the standards-based LACP protocol in mixed-vendor environments.
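A minimal sketch of a two-port PAgP bundle (interface names and group number are illustrative assumptions):

```
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode desirable   ! PAgP: actively negotiate the bundle
! the remote side may use desirable or auto for the channel to form
```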
Option A is incorrect because LACP (Link Aggregation Control Protocol) is an IEEE 802.3ad standard protocol, not Cisco proprietary. Option C is incorrect because UDLD (UniDirectional Link Detection) is a Layer 2 protocol for detecting unidirectional links, not related to EtherChannel negotiation. Option D is incorrect because VTP (VLAN Trunking Protocol) manages VLAN configuration propagation, not EtherChannel formation.
Question 133:
What is the purpose of the split-horizon rule in distance-vector routing protocols?
A) To reduce routing table size
B) To prevent routing loops
C) To increase convergence speed
D) To enable load balancing
Answer: B
Explanation:
The split-horizon rule is a fundamental loop-prevention mechanism used in distance-vector routing protocols such as RIP and EIGRP. This rule prevents routing loops by prohibiting a router from advertising a route back through the same interface from which it learned that route. The logic behind split-horizon is straightforward: if Router A learned about a network from Router B, there is no reason for Router A to advertise that network back to Router B, as Router B already knows about it and advertising it back could create confusing or incorrect routing information.
Without split-horizon, distance-vector protocols would be highly susceptible to routing loops and counting-to-infinity problems. Consider a scenario where Router A advertises a network to Router B. Without split-horizon, Router B would advertise that same network back to Router A, possibly with an increased metric. Router A might then believe that Router B offers an alternative path to that network and update its routing table accordingly, creating a loop where traffic bounces between the two routers indefinitely or until hop count limits are reached.
Split-horizon significantly improves network stability in distance-vector environments. However, there are situations where split-horizon must be disabled or modified, particularly in hub-and-spoke network topologies using Frame Relay or other NBMA networks. In these scenarios, the hub router must advertise routes received on one logical interface back out through the same physical interface to reach other spoke routers. To address this, protocols implement split-horizon with poison reverse, where routes are advertised back through the interface they were learned on but with an infinite metric, explicitly marking them as unreachable.
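On a hub router in such a topology, split-horizon can be disabled per interface (the interface and AS number below are illustrative assumptions):

```
interface Serial0/0
 no ip split-horizon eigrp 100   ! hub in a Frame Relay hub-and-spoke design
```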
Option A is incorrect because split-horizon does not reduce routing table size; it affects routing advertisement behavior. Option C is incorrect because split-horizon actually may slow convergence slightly by limiting update propagation. Option D is incorrect because load balancing is achieved through other mechanisms, not split-horizon.
Question 134:
Which VLAN is the default native VLAN for 802.1Q trunk links?
A) VLAN 0
B) VLAN 1
C) VLAN 10
D) VLAN 1005
Answer: B
Explanation:
VLAN 1 is the default native VLAN for 802.1Q trunk links on Cisco switches. The native VLAN is a special VLAN designation used on trunk ports to handle untagged traffic. When frames belonging to the native VLAN traverse a trunk link, they are sent without an 802.1Q VLAN tag, allowing backward compatibility with devices that do not understand VLAN tagging. All other VLANs on the trunk are tagged with their respective VLAN IDs encapsulated in the 802.1Q header.
The native VLAN serves several important purposes in network operations. It handles control plane traffic such as CDP, VTP, and DTP messages that are transmitted untagged by default. It also provides a mechanism for interconnecting with older equipment that predates VLAN technology or devices that do not support 802.1Q tagging. Additionally, the native VLAN must match on both ends of a trunk link for proper operation, and mismatches can lead to connectivity issues and security vulnerabilities.
Using VLAN 1 as the native VLAN presents security concerns because it is well-known and predictable. Best practice recommendations suggest changing the native VLAN from VLAN 1 to an unused VLAN to mitigate potential security attacks such as VLAN hopping. Additionally, it is recommended to prune the native VLAN from all trunk links where it is not actively needed and to avoid using the native VLAN for regular user or data traffic. Some organizations configure all ports to tag all VLANs including the native VLAN for enhanced security.
Network administrators configure the native VLAN on trunk ports using the switchport trunk native vlan command. When troubleshooting connectivity issues on trunk links, verifying that native VLAN configurations match on both ends is an essential step.
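A sketch of the trunk configuration with the native VLAN moved off VLAN 1 per the best practice above (interface and VLAN numbers are illustrative assumptions):

```
interface GigabitEthernet0/24
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk native vlan 99   ! move the native VLAN off VLAN 1
```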
Option A is incorrect because VLAN 0 does not exist in standard VLAN numbering. Option C is incorrect because VLAN 10 is not a default value and would require explicit configuration. Option D is incorrect because VLAN 1005 is at the upper end of the standard VLAN range.
Question 135:
What is the purpose of the EIGRP feasible successor?
A) To serve as the primary routing path
B) To provide a backup route that meets feasibility condition
C) To authenticate routing updates
D) To summarize routing information
Answer: B
Explanation:
The EIGRP feasible successor is a backup route that meets the feasibility condition and can be used immediately if the primary route fails, providing rapid convergence without requiring a query process. This concept is unique to EIGRP and is made possible by the DUAL algorithm, which maintains a topology table containing not just the best routes but also alternative paths that could be used for forwarding. Feasible successors are pre-computed backup routes that are guaranteed to be loop-free based on mathematical conditions evaluated during route selection.
The feasibility condition that determines whether a route can be a feasible successor states that a neighbor’s advertised distance to a destination must be less than the local router’s feasible distance to that same destination. In simpler terms, the neighbor must be closer to the destination than the local router’s current best path. This condition ensures that using the backup route will not create a routing loop. When the primary path (successor) fails, EIGRP can immediately install a feasible successor from the topology table into the routing table and begin using it for forwarding, achieving convergence in milliseconds.
If no feasible successor exists when the primary route fails, EIGRP must enter an active state for that destination and send query messages to all neighbors to discover alternative paths. This query process takes longer than simply switching to a feasible successor and can impact convergence time. Therefore, network designs that provide multiple paths and allow EIGRP to identify feasible successors achieve better resiliency and faster recovery from link failures.
Network administrators can view feasible successors in the EIGRP topology table using the show ip eigrp topology command. Routes marked as feasible successors indicate available backup paths that provide enhanced reliability.
Option A is incorrect because the successor, not feasible successor, serves as the primary routing path. Option C is incorrect because authentication is handled separately through EIGRP authentication configuration. Option D is incorrect because route summarization is a different EIGRP feature.
Question 136:
Which command enables IPv6 routing on a Cisco router?
A) ipv6 enable
B) ipv6 unicast-routing
C) ipv6 routing enable
D) ip routing ipv6
Answer: B
Explanation:
The command ipv6 unicast-routing is used to enable IPv6 routing functionality on Cisco routers. This global configuration command activates the IPv6 routing process, allowing the router to forward IPv6 packets between interfaces and participate in IPv6 routing protocols. Without this command, the router can have IPv6 addresses configured on its interfaces and communicate using IPv6 on locally connected networks, but it will not route IPv6 traffic between different networks or run IPv6 routing protocols like OSPFv3, EIGRP for IPv6, or BGP for IPv6.
When ipv6 unicast-routing is enabled, several important changes occur in the router’s operation. The router begins forwarding IPv6 packets between interfaces based on its IPv6 routing table; it starts sending IPv6 Router Advertisement messages on interfaces with IPv6 addresses configured, enabling Stateless Address Autoconfiguration (SLAAC) for connected devices; and it allows configuration and operation of IPv6 routing protocols. Additionally, the router can then support features such as IPv6 access control lists, NAT64 (IPv6-to-IPv4 translation), and other advanced IPv6 capabilities.
This command is typically one of the first configuration steps when implementing IPv6 in a network, executed at the global configuration level before configuring IPv6 addresses on specific interfaces or enabling IPv6 routing protocols. After enabling IPv6 routing, administrators proceed to configure IPv6 addresses on interfaces using commands like ipv6 address, configure IPv6 routing protocols, and implement IPv6 security policies. The command is persistent across reboots when the configuration is saved and can be disabled using the no ipv6 unicast-routing command if IPv6 routing needs to be deactivated.
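A minimal configuration sequence might look like the following sketch; the interface name is illustrative, and the prefix uses the 2001:db8::/32 documentation range:

```
Router(config)# ipv6 unicast-routing
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ipv6 address 2001:db8:0:1::1/64
Router(config-if)# no shutdown
```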
Option A is incorrect because ipv6 enable is an interface-level command that enables IPv6 on a specific interface without enabling routing. Option C is incorrect because ipv6 routing enable is not valid Cisco IOS syntax. Option D is incorrect because ip routing ipv6 is not a valid command format.
Question 137:
What is the function of the BPDU Guard feature in STP?
A) To prevent unauthorized switches from connecting to PortFast-enabled ports
B) To increase spanning tree priority
C) To accelerate STP convergence
D) To enable VLAN pruning
Answer: A
Explanation:
BPDU Guard is a Spanning Tree Protocol security feature that protects the network by preventing unauthorized switches or bridging devices from connecting to ports that are configured with PortFast. When BPDU Guard is enabled on a port, the switch monitors for incoming Bridge Protocol Data Units. If a BPDU is received on a BPDU Guard-enabled port, the switch immediately places that port into an error-disabled state (err-disabled), effectively shutting down the port and preventing any traffic from passing through it. This protective action prevents potential spanning tree topology disruptions and security vulnerabilities.
The primary use case for BPDU Guard is on access layer ports connected to end-user devices such as workstations, printers, IP phones, and servers. These ports typically have PortFast enabled to provide immediate connectivity without spanning tree delays. Since end devices should never send BPDUs, receiving a BPDU on such a port indicates that either someone has mistakenly or maliciously connected a switch or bridge, or there is a misconfiguration. BPDU Guard ensures that such connections are immediately detected and neutralized before they can cause spanning tree recalculations or create bridging loops.
When a port enters the err-disabled state due to BPDU Guard, it remains disabled until an administrator manually intervenes. The administrator must identify and remove the source of BPDUs, verify proper network connectivity, and then re-enable the port using the shutdown followed by no shutdown commands. Alternatively, the err-disable recovery feature can be configured to automatically attempt to recover err-disabled ports after a specified timeout period, though this should be used cautiously to avoid repeatedly enabling problematic connections.
BPDU Guard can be configured globally for all PortFast-enabled ports using spanning-tree portfast bpduguard default or per-interface using spanning-tree bpduguard enable. Best practice recommends enabling BPDU Guard on all access ports to enhance network security.
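Both configuration approaches can be sketched as follows; the interface name is illustrative:

```
! Enable globally on all PortFast-enabled ports:
Switch(config)# spanning-tree portfast bpduguard default

! Or enable per interface:
Switch(config)# interface GigabitEthernet0/5
Switch(config-if)# spanning-tree portfast
Switch(config-if)# spanning-tree bpduguard enable
```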
Option B is incorrect because BPDU Guard does not affect spanning tree priority. Option C is incorrect because accelerating convergence is a function of features like UplinkFast or BackboneFast, not BPDU Guard. Option D is incorrect because VLAN pruning is a separate VTP feature unrelated to BPDU Guard.
Question 138:
Which OSPF LSA type describes external routes redistributed into OSPF?
A) Type 1 LSA
B) Type 2 LSA
C) Type 3 LSA
D) Type 5 LSA
Answer: D
Explanation:
Type 5 LSAs, also known as Autonomous System External LSAs, are used in OSPF to describe external routes that have been redistributed into the OSPF domain from other routing protocols or static routes. These LSAs are generated by Autonomous System Boundary Routers (ASBRs), which are OSPF routers that have connections to external routing domains and perform route redistribution. Type 5 LSAs are flooded throughout the entire OSPF autonomous system, except into stub areas and totally stubby areas, making external routing information available to all routers in the OSPF domain.
Type 5 LSAs contain critical information about external destinations including the network address and subnet mask of the external route, the metric or cost to reach the external destination, the external metric type (E1 or E2), the forwarding address that should be used to reach the external network, and an external route tag that can be used for route filtering or policy implementation. The distinction between E1 and E2 external metrics is particularly important. E2 metrics (the default) represent only the external cost advertised by the ASBR and do not include the internal OSPF cost to reach the ASBR. E1 metrics include both the external cost and the internal OSPF cost, providing a more accurate total path cost.
When OSPF routers receive Type 5 LSAs, they install the external routes into their routing tables with the designation of O E1 or O E2, depending on the external metric type. The ASBR that originates Type 5 LSAs can be identified through Type 4 LSAs, which are generated by Area Border Routers to advertise the location of ASBRs to routers in other areas. This two-level advertisement system allows OSPF routers to understand both what external routes are available and how to reach the ASBRs that provide access to those external destinations.
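The E1 versus E2 cost difference can be sketched numerically in Python. The costs below are invented for illustration:

```python
def e2_cost(external_cost: int, internal_cost: int) -> int:
    """O E2 (default): only the external cost set by the ASBR counts;
    the internal OSPF cost to reach the ASBR is not added in."""
    return external_cost

def e1_cost(external_cost: int, internal_cost: int) -> int:
    """O E1: external cost plus the internal OSPF cost to the ASBR."""
    return external_cost + internal_cost

# Two paths to the same external route:
#   via ASBR-1: external cost 20, internal cost to ASBR-1 = 10
#   via ASBR-2: external cost 20, internal cost to ASBR-2 = 50
print(e2_cost(20, 10), e2_cost(20, 50))  # 20 20 -> the E2 metric alone is identical
print(e1_cost(20, 10), e1_cost(20, 50))  # 30 70 -> E1 prefers the closer ASBR
```

This shows why E1 is often preferred when multiple ASBRs redistribute the same routes: its metric reflects the total path cost, not just the external component.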
Option A is incorrect because Type 1 LSAs are Router LSAs that describe router links within an area. Option B is incorrect because Type 2 LSAs are Network LSAs generated by designated routers on multi-access networks. Option C is incorrect because Type 3 LSAs are Summary LSAs that describe inter-area routes.
Question 139:
What is the default encapsulation type for Frame Relay on Cisco routers?
A) IETF
B) Cisco
C) PPP
D) HDLC
Answer: B
Explanation:
The default Frame Relay encapsulation type on Cisco routers is Cisco proprietary encapsulation. This encapsulation format is optimized for Cisco-to-Cisco Frame Relay connections and provides efficient frame formatting for various protocols transported over Frame Relay networks. When Frame Relay is configured on a Cisco router interface without specifying an encapsulation type, the router automatically uses Cisco encapsulation, which works well in homogeneous Cisco network environments but may cause compatibility issues when connecting to non-Cisco equipment.
Cisco Frame Relay encapsulation uses a proprietary frame format that includes a 4-byte header containing protocol identification information. This encapsulation supports multiple network layer protocols including IP, IPX, AppleTalk, and others, allowing the Frame Relay network to carry diverse traffic types. The Cisco encapsulation is configured using the command encapsulation frame-relay on the interface, or more explicitly with encapsulation frame-relay cisco, though the cisco keyword is optional since it represents the default behavior.
When connecting Cisco routers to Frame Relay equipment from other vendors or when interoperability is required, network administrators must change the encapsulation to IETF (Internet Engineering Task Force) standard Frame Relay encapsulation. IETF encapsulation follows RFC 1490/2427 standards and ensures compatibility across multi-vendor environments. The configuration command encapsulation frame-relay ietf changes the encapsulation type to the standards-based format. Additionally, encapsulation type can be specified per virtual circuit using the frame-relay interface-dlci command with the ietf keyword, allowing mixed encapsulation types on a single physical interface.
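The encapsulation options can be sketched as follows; the interface and DLCI values are illustrative:

```
! Default (Cisco) encapsulation; the cisco keyword is implied:
Router(config)# interface Serial0/0
Router(config-if)# encapsulation frame-relay

! Standards-based encapsulation for multi-vendor interoperability:
Router(config-if)# encapsulation frame-relay ietf

! Per-virtual-circuit override on a subinterface:
Router(config)# interface Serial0/0.102 point-to-point
Router(config-subif)# frame-relay interface-dlci 102 ietf
```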
Understanding Frame Relay encapsulation types is important for troubleshooting connectivity issues in Frame Relay networks. If encapsulation types do not match between connected routers, Layer 3 protocols will fail to communicate properly even though the Frame Relay virtual circuit appears to be active at Layer 2.
Option A is incorrect because IETF is an available option but not the default. Option C is incorrect because PPP is a different WAN protocol, not a Frame Relay encapsulation type. Option D is incorrect because HDLC is another WAN protocol used on serial links, not a Frame Relay encapsulation.
Question 140:
Which QoS mechanism is used to limit bandwidth consumption for specific traffic types?
A) Priority queuing
B) Traffic shaping
C) Traffic policing
D) WRED
Answer: C
Explanation:
Traffic policing is the QoS mechanism specifically designed to limit bandwidth consumption for specific traffic types by enforcing maximum rate limits and dropping or marking packets that exceed configured thresholds. Policing provides hard enforcement of bandwidth limits, immediately discarding excess traffic or remarking it to a lower priority class when traffic rates exceed the specified committed information rate (CIR). This makes policing ideal for enforcing service level agreements at network boundaries and preventing any single traffic class from consuming more than its allocated bandwidth share.
Traffic policing operates using a token bucket algorithm that defines both the rate at which tokens are added to the bucket (representing the allowed traffic rate) and the bucket depth (representing burst tolerance). As packets arrive, tokens are removed from the bucket. If sufficient tokens exist, packets are transmitted or marked as conforming. When the bucket is empty, arriving packets are considered exceeding and are either dropped immediately or marked with a different priority designation for potential dropping later if congestion occurs. Policing can be single-rate or dual-rate, with dual-rate policing supporting both a CIR and a peak information rate (PIR) for more granular traffic control.
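The single-rate, single-bucket case can be sketched in Python. This is a simplified model: the rate and packet sizes are illustrative, the CIR is expressed in bytes per second for arithmetic convenience (IOS uses bits per second), and real policers run in hardware at line rate:

```python
class TokenBucketPolicer:
    """Single-rate token bucket policer sketch.

    cir: committed rate in bytes per second (simplified unit).
    bc:  bucket depth (committed burst) in bytes.
    """
    def __init__(self, cir: float, bc: int):
        self.cir = cir
        self.bc = bc
        self.tokens = float(bc)   # bucket starts full
        self.last = 0.0           # timestamp of the last refill

    def packet(self, size: int, now: float) -> str:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.bc, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size   # conforming traffic spends tokens
            return "conform"      # transmit (or mark as in-contract)
        return "exceed"           # drop or re-mark; never buffered

policer = TokenBucketPolicer(cir=1000, bc=1500)
print(policer.packet(1500, now=0.0))   # conform: bucket starts full
print(policer.packet(1500, now=0.5))   # exceed: only 500 tokens refilled
print(policer.packet(1500, now=2.0))   # conform: 1.5 s of refill restores the bucket
```

Note that the "exceed" packet is acted on immediately rather than queued, which is the defining difference between policing and shaping.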
The key characteristic of policing is that it does not buffer or delay excess traffic; it takes immediate action when thresholds are exceeded. This makes policing aggressive toward TCP-based applications, because dropped packets trigger TCP retransmissions and congestion control mechanisms. Despite this, policing is widely used at service provider network edges to enforce customer bandwidth commitments and at enterprise network perimeters to control bandwidth usage for specific application categories.
Policing is configured using Modular QoS CLI (MQC) with class-map and policy-map structures, specifying the traffic classes to police and the rate limits to enforce. Common actions include drop for exceeding traffic, transmit for conforming traffic, and set for marking packets with different DSCP or precedence values.
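An MQC policing policy might be sketched as follows; the class and policy names, match criterion, rate, and interface are all illustrative:

```
Router(config)# class-map match-all SCAVENGER
Router(config-cmap)# match dscp cs1
Router(config)# policy-map POLICE-SCAVENGER
Router(config-pmap)# class SCAVENGER
Router(config-pmap-c)# police 8000000 conform-action transmit exceed-action drop
Router(config)# interface GigabitEthernet0/1
Router(config-if)# service-policy input POLICE-SCAVENGER
```

Here conforming SCAVENGER-class traffic is transmitted and anything above 8 Mbps is dropped at the interface where the policy is applied.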
Option A is incorrect because priority queuing provides preferential treatment and low latency for specific traffic rather than limiting bandwidth. Option B is incorrect because traffic shaping buffers excess traffic rather than dropping it, smoothing traffic flows. Option D is incorrect because WRED provides congestion avoidance through selective packet dropping, not bandwidth limiting.