Question 81:
A router running OSPF in multiple areas shows a neighbor stuck in “2-way” state. Which is the most likely cause?
A) Mismatched OSPF area numbers
B) Authentication mismatch
C) Incorrect hello/dead timers
D) Any of the above
Answer: D) Any of the above
Explanation:
In OSPF, neighbor relationships pass through several states (Down, Init, 2-Way, ExStart, Exchange, Loading, Full) before reaching full adjacency. The 2-way state indicates that a router has detected bidirectional communication with its neighbor but has not yet formed a full adjacency. On multi-access networks this state is actually normal between two DROther routers, which by design never progress past 2-way with each other; only adjacencies with the DR and BDR proceed to Full. When a neighbor that should become fully adjacent remains stuck, the usual culprit is a configuration mismatch. Common causes include mismatched OSPF area numbers, where routers believe they belong to different areas and therefore cannot exchange LSAs; authentication mismatches, where OSPF authentication keys or methods differ between routers, preventing them from accepting each other's packets; and incorrect hello or dead intervals, where timers on one router do not match the neighbor's configuration, causing the neighbor relationship to fail or flap. Correcting any of these mismatches resolves the issue.
Any of these misconfigurations can prevent OSPF from progressing beyond the 2-way state. To resolve the issue, administrators must verify that all OSPF parameters—area numbers, authentication, and timers—match on both sides of the link. Once corrected, routers can move from the 2-way state to the Full state, forming a complete adjacency, allowing LSAs to be exchanged and the routing table to be properly populated. Monitoring neighbor states with commands like show ip ospf neighbor helps identify and troubleshoot such mismatches efficiently. Proper alignment ensures stable OSPF operation and reliable routing throughout the network.
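The checks described above map directly to a handful of CLI commands. A minimal IOS sketch follows (interface names, process number, and key values are illustrative, not taken from the question):

```
! Verify neighbor state and per-interface OSPF parameters
show ip ospf neighbor
show ip ospf interface GigabitEthernet0/0

! Make hello/dead timers, area, and authentication match the neighbor
interface GigabitEthernet0/0
 ip ospf hello-interval 10
 ip ospf dead-interval 40
 ip ospf authentication message-digest
 ip ospf message-digest-key 1 md5 MyOspfKey
!
router ospf 1
 network 10.0.0.0 0.0.0.255 area 0
```

Running show ip ospf interface on both routers side by side is usually the fastest way to spot a timer or area mismatch.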
Question 82:
You configure EIGRP and notice that routes with higher metrics are not being used. Which command allows EIGRP to perform unequal-cost load balancing?
A) ip route
B) variance
C) maximum-paths
D) redistribute
Answer: B) variance
Explanation:
The variance command allows EIGRP to use routes that are within a certain multiple of the best path’s metric, enabling unequal-cost load balancing. The maximum-paths command controls the number of equal-cost paths, and redistribute is for route redistribution. In EIGRP, load balancing can be performed not only over equal-cost paths but also over unequal-cost paths using the variance command. EIGRP calculates the best path to a destination, known as the successor, based on its composite metric, which by default considers bandwidth and delay (load and reliability can be included by changing the K values, though this is rarely recommended). Normally, only routes with the same metric as the successor are used for forwarding, but by applying the variance command, network administrators can include additional feasible routes whose metrics are within a specified multiple of the best path’s metric. For example, setting variance 2 allows EIGRP to use routes with metrics up to twice the best path’s metric for load balancing, effectively increasing path utilization and redundancy. Note that only feasible successors qualify: a route’s reported distance must be lower than the successor’s feasible distance, even when its metric falls within the variance multiplier.
Other EIGRP commands serve different purposes. The maximum-paths command limits the number of equal-cost paths that EIGRP can install in the routing table, but does not affect unequal-cost paths. The redistribute command is used to inject routes from other protocols into EIGRP, enabling route sharing between different routing domains, but it does not influence load-balancing behavior. The ip route command is used for configuring static routes and is unrelated to EIGRP’s dynamic path selection.
By using variance effectively, network engineers can optimize bandwidth usage across multiple paths, improve redundancy, and achieve more efficient traffic distribution in complex networks, while controlling which paths participate in forwarding based on EIGRP metrics.
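A minimal IOS sketch of the commands discussed above (autonomous-system number and network statement are illustrative):

```
router eigrp 100
 network 10.0.0.0
 variance 2          ! accept feasible successors up to 2x the successor metric
 maximum-paths 4     ! cap on how many paths are installed in the routing table
```

After applying variance, show ip route eigrp should show multiple next hops for destinations that have qualifying feasible successors.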
Question 83:
Which type of NAT would you use to translate multiple private IPs to a single public IP while maintaining multiple sessions?
A) Static NAT
B) Dynamic NAT
C) PAT
D) NAT64
Answer: C) PAT
Explanation:
Port Address Translation (PAT) allows many-to-one IP translation, using different source ports to maintain multiple sessions. Static NAT is one-to-one, dynamic NAT uses a pool, and NAT64 translates IPv6 to IPv4. Network Address Translation (NAT) is a fundamental technique in networking that allows devices in a private IP space to communicate with external networks. There are several NAT types, each serving a different purpose: Static NAT, Dynamic NAT, Port Address Translation (PAT), and NAT64.
Port Address Translation (PAT), also called NAT overload, is a many-to-one translation method. It allows multiple internal devices with private IP addresses to share a single public IP address when communicating with external networks. PAT differentiates sessions using source port numbers, ensuring that replies from external networks are correctly routed back to the originating device. This is widely used in enterprise networks and home routers because it conserves public IP addresses while enabling many hosts to access the internet simultaneously.
Static NAT provides a one-to-one mapping between a private IP and a public IP. This is useful when an internal device, such as a web server, needs a consistent external address for inbound connections. Unlike PAT, each private IP requires a dedicated public IP, which can be inefficient for large networks.
Dynamic NAT maps private IP addresses to a pool of public IPs. When an internal host initiates traffic, it receives a public IP from the pool, which is released when the session ends. This method is one-to-one but allows sharing of limited public addresses among multiple devices over time.
NAT64 enables communication between IPv6 and IPv4 networks. IPv6 hosts send packets to IPv4 servers, and NAT64 translates addresses and headers between the two protocols. This is essential in environments where IPv6 is deployed internally but legacy IPv4 services still exist.
In summary, PAT maximizes the use of a single public IP for multiple devices, static NAT provides a fixed mapping, dynamic NAT shares a pool of public IPs dynamically, and NAT64 bridges IPv6 and IPv4 connectivity. Each method addresses specific network requirements, and understanding the differences is crucial for proper NAT deployment.
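The PAT behavior described above is configured on IOS with the overload keyword. A minimal sketch using documentation addresses (interface names and subnets are illustrative):

```
! Mark the inside and outside interfaces
interface GigabitEthernet0/0
 ip address 192.168.1.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet0/1
 ip address 203.0.113.1 255.255.255.0
 ip nat outside
!
! Translate the whole inside subnet to the outside interface address;
! "overload" enables PAT, distinguishing sessions by source port
access-list 1 permit 192.168.1.0 0.0.0.255
ip nat inside source list 1 interface GigabitEthernet0/1 overload
```

show ip nat translations then displays one entry per session, with distinct source ports sharing the single public address.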
Question 84:
A network engineer notices OSPF external routes are not visible in a stub area. Why?
A) Stub areas block Type 5 LSAs
B) Stub areas block Type 3 LSAs
C) Stub areas allow all external routes
D) OSPF does not support external routes
Answer: A) Stub areas block Type 5 LSAs
Explanation:
OSPF stub areas prevent flooding of Type 5 LSAs (external routes) to reduce the routing table size. Only summary routes (Type 3) from ABRs are allowed, and internal routes remain reachable. In OSPF (Open Shortest Path First), stub areas are a type of area designed to reduce the size of the routing table and minimize the processing overhead on routers within the area. Stub areas achieve this by restricting the types of LSAs (Link-State Advertisements) that are flooded into the area.
The key characteristic of a stub area is that it blocks Type 5 LSAs, which are external LSAs used to advertise routes from other routing protocols or external networks into the OSPF domain. By preventing Type 5 LSAs from entering the stub area, routers inside the area do not need to maintain a large number of external routes, which conserves memory and CPU resources. Instead, stub areas rely on a default route that the Area Border Router (ABR) provides to reach external networks. This ensures that all destinations outside the OSPF domain are reachable without requiring detailed knowledge of every external route.
Importantly, stub areas still allow Type 3 LSAs, which are summary LSAs generated by ABRs to advertise networks from other OSPF areas. This ensures that internal OSPF routes between areas are reachable while keeping external routing information limited. Internal routes within the stub area, represented by Type 1 (Router LSAs) and Type 2 (Network LSAs), remain fully functional and reachable.
OSPF does support external routes overall, but the stub area configuration specifically prevents external Type 5 LSAs from being flooded into the area. This contrasts with normal OSPF areas, where both internal and external LSAs are flooded, allowing routers to have complete routing information.
By using stub areas, network administrators can optimize OSPF performance in areas with many routers or limited resources, reduce the size of routing tables, and simplify routing computations, while still maintaining reachability to external networks via a default route. This makes stub areas particularly useful in branch offices or remote network segments.
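The stub behavior above is a one-line change per router. A minimal IOS sketch (process and area numbers are illustrative):

```
router ospf 1
 area 1 stub              ! must be configured on every router in area 1
```

On the ABR only, area 1 stub no-summary additionally blocks Type 3 summary LSAs, turning the area into a totally stubby area that relies entirely on the ABR's default route.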
Question 85:
In SD-WAN, which device is responsible for orchestrating initial device authentication and providing overlay connectivity?
A) vManage
B) vSmart
C) vBond
D) vEdge
Answer: C) vBond
Explanation:
vBond handles zero-touch provisioning (ZTP) and initial authentication of devices joining the SD-WAN overlay. After vBond authentication, devices establish control-plane connections with vSmart and receive policies from vManage. In Cisco SD-WAN architecture, the vBond orchestrator plays a critical role in the initial onboarding and authentication of devices joining the overlay network. vBond is responsible for Zero-Touch Provisioning (ZTP), which allows newly deployed vEdge devices to automatically connect to the SD-WAN network without manual configuration. When a new device powers on, it contacts vBond to authenticate itself using certificates, ensuring that only authorized devices can join the overlay.
Once authentication is successful, vBond facilitates the establishment of control-plane connections between the vEdge devices and vSmart controllers, which handle routing decisions and distribute data-plane policies. vBond ensures that these connections are secure by enabling certificate-based authentication and exchanging the necessary information to build encrypted tunnels.
After the control-plane connectivity is established, devices can communicate with vManage, the centralized management system responsible for policy creation, configuration deployment, and network monitoring. vManage provides administrators with a GUI to define policies for traffic prioritization, security, and application-aware routing, which are then distributed to vEdge devices through the vSmart controllers.
In summary, vBond is the first point of contact for new SD-WAN devices, enabling secure onboarding and establishing the foundation for a functional overlay network. Without vBond, devices could not join the SD-WAN fabric automatically, making it a crucial component alongside vSmart, vManage, and vEdge in maintaining secure, scalable, and manageable SD-WAN deployments.
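On a vEdge router, the pointer to vBond lives in the system block. A minimal sketch of the relevant configuration (all names and addresses below are illustrative placeholders):

```
system
 host-name         vedge-branch1
 system-ip         10.255.0.10
 site-id           10
 organization-name "Example-Org"
 vbond             203.0.113.10
```

With this in place and valid certificates installed, the device can reach vBond, authenticate, and be directed to its vSmart and vManage controllers.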
Question 86:
Which BGP attribute determines the preferred path to reach an external destination across multiple neighboring ASes?
A) Weight
B) LOCAL_PREF
C) AS_PATH
D) MED
Answer: D) MED
Explanation:
MED (Multi-Exit Discriminator) is used between autonomous systems to suggest preferred entry points. Lower MED values are preferred. Weight and LOCAL_PREF are internal, and AS_PATH helps avoid loops. In BGP (Border Gateway Protocol), the Multi-Exit Discriminator (MED) is an important path attribute used to influence routing decisions between different autonomous systems (ASes). MED allows an AS to suggest preferred entry points for inbound traffic from neighboring ASes. Essentially, it communicates to external neighbors which path into the AS is more desirable. Lower MED values are preferred, meaning that routes with smaller MED values are considered better when selecting among multiple exit points to reach a particular destination. Note that by default MED is only compared between routes received from the same neighboring AS; comparing MED across different ASes requires the bgp always-compare-med command. This helps optimize traffic flow and manage inter-AS routing efficiently.
Other BGP attributes play different roles in path selection. Weight is a Cisco-specific attribute that is locally significant to the router. Higher weight values are preferred, and it is always evaluated first in the BGP path selection process. Unlike MED, weight does not get propagated to other routers and only affects routing decisions on the local device.
LOCAL_PREF (Local Preference) is another attribute used within an autonomous system to influence outbound traffic. Higher LOCAL_PREF values are preferred, allowing network administrators to control which exit point their traffic uses to leave the AS. Unlike MED, LOCAL_PREF is shared with all BGP routers within the same AS, making it a powerful tool for internal traffic engineering.
AS_PATH is a path attribute that lists all the ASes a route has traversed. Its primary function is to prevent routing loops by ensuring that a route is not accepted if the local AS already appears in the path. It also serves as a tie-breaker during BGP path selection, as shorter AS_PATHs are generally preferred when other attributes are equal.
In summary, MED specifically affects inter-AS routing by suggesting the preferred entry point for external neighbors, with lower values being more favorable. Weight and LOCAL_PREF control routing preferences within an AS, while AS_PATH ensures loop prevention and aids path selection. Together, these attributes provide network engineers with granular control over BGP routing decisions, both internally and externally.
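MED is typically set with an outbound route-map toward an eBGP neighbor. A minimal IOS sketch (AS numbers, neighbor address, and metric value are illustrative):

```
route-map SET-MED permit 10
 set metric 50                     ! MED advertised to the eBGP peer; lower wins
!
router bgp 65001
 neighbor 192.0.2.2 remote-as 65002
 neighbor 192.0.2.2 route-map SET-MED out
```

Advertising a lower MED on the preferred link signals to the neighboring AS that it should use that entry point for traffic into AS 65001.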
Question 87:
A switchport configured as a trunk is dropping VLAN traffic. Which command can verify allowed VLANs?
A) show vlan brief
B) show interfaces trunk
C) show running-config
D) show spanning-tree
Answer: B) show interfaces trunk
Explanation:
The show interfaces trunk command displays trunking mode, allowed VLANs, and native VLAN, helping diagnose VLAN pruning or misconfigurations; show vlan brief lists VLANs locally, not on trunks. The command show interfaces trunk is an essential tool for verifying and troubleshooting VLAN propagation across trunk links in a Cisco switched network. It provides detailed information about all trunked interfaces, including the trunking mode (whether the port is operating as a trunk or access port), the VLANs allowed on the trunk, and the native VLAN. This is crucial for diagnosing connectivity issues between switches, particularly when VLANs are not communicating as expected or when VLAN pruning has been applied. By displaying which VLANs are actively passing through the trunk and the operational status of each trunk interface, network engineers can quickly identify misconfigurations or inconsistencies in the VLAN setup.
Other related commands serve different purposes but do not offer the same level of trunk-specific detail. The show vlan brief command lists all VLANs configured on the switch along with the ports assigned to them, but it does not indicate which interfaces are trunking or which VLANs are allowed on trunks. show running-config displays the complete switch configuration, including trunk settings, but requires manual inspection and interpretation to verify trunk details. show spanning-tree provides information about the spanning-tree protocol status, port roles, and states, which helps prevent loops, but it does not give explicit details about VLAN membership or trunk configuration.
By using the show interfaces trunk command, administrators can efficiently verify trunk configuration, ensure correct VLAN propagation across the network, and quickly troubleshoot VLAN communication issues, making it an indispensable command in switched network management.
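Representative output from the command (interface names and VLAN IDs below are illustrative, not from the question) looks like this:

```
Switch# show interfaces trunk

Port        Mode         Encapsulation  Status        Native vlan
Gi0/1       on           802.1q         trunking      1

Port        Vlans allowed on trunk
Gi0/1       10,20,30

Port        Vlans allowed and active in management domain
Gi0/1       10,20

Port        Vlans in spanning tree forwarding state and not pruned
Gi0/1       10,20
```

Comparing the "allowed" list against the "forwarding state and not pruned" list quickly reveals whether a VLAN is being filtered by the allowed list, is inactive, or is blocked by spanning tree.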
Question 88:
Which Cisco wireless feature provides fast client handoff with pre-shared security keys?
A) WPA2
B) 802.11r
C) FlexConnect
D) 802.1X
Answer: B) 802.11r
Explanation:
802.11r (Fast Roaming) allows clients to roam between APs quickly, exchanging keys efficiently without full reauthentication. FlexConnect supports branch AP switching, 802.1X handles authentication, and WPA2 provides encryption. 802.11r, also known as Fast Roaming, is a wireless standard designed to improve client mobility in enterprise Wi-Fi networks. In traditional WLANs, when a client moves from one access point (AP) to another, it must go through a full 802.1X or WPA2 authentication process. This process can take hundreds of milliseconds, which may disrupt time-sensitive applications like voice over Wi-Fi (VoWiFi) or video conferencing. 802.11r addresses this by enabling fast transition (FT) key exchanges between APs, allowing clients to roam seamlessly without full reauthentication. The result is reduced latency, minimal packet loss, and uninterrupted sessions for mobile clients.
FlexConnect is a Cisco technology for branch APs that allows them to locally switch traffic when the AP loses connectivity to the wireless controller. FlexConnect provides resilience and flexibility for branch deployments but does not directly affect fast roaming or the key exchange process.
802.1X is an authentication framework used to control network access at the port or AP level. While essential for security and granting access to legitimate users, 802.1X alone does not optimize roaming performance. It provides the credentials verification needed before a client joins the network, but does not address the delays associated with roaming between APs.
WPA2 is a Wi-Fi security protocol that provides encryption using AES-CCMP. It ensures the confidentiality and integrity of wireless traffic but does not impact roaming performance. WPA2 protects data while 802.11r ensures clients can move efficiently across APs without dropping connections.
In combination, 802.11r, 802.1X, and WPA2 provide secure, high-performance wireless connectivity. 802.11r optimizes roaming by minimizing latency during AP handoffs, 802.1X ensures only authorized users gain access, and WPA2 secures the communication. FlexConnect adds flexibility for branch deployments but is separate from the fast roaming mechanism. This layered approach ensures enterprise wireless networks are both secure and performance-optimized for mobile applications.
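On a Catalyst 9800-style controller, enabling FT with pre-shared keys is done under the WLAN profile. A rough sketch follows; exact syntax varies by controller platform and software release, and the profile name, SSID, and key are illustrative placeholders:

```
wlan CORP-WLAN 1 CorpSSID
 security ft                        ! enable 802.11r fast transition
 no security wpa akm dot1x
 security wpa akm ft psk            ! FT key management with a pre-shared key
 security wpa psk set-key ascii 0 MyPskValue
```

Clients that support 802.11r can then roam between APs advertising this WLAN using the FT handshake instead of a full four-way exchange.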
Question 89:
Which QoS mechanism smooths traffic bursts and buffers excess packets?
A) Policing
B) Shaping
C) LLQ
D) CBWFQ
Answer: B) Shaping
Explanation:
Traffic shaping delays excess packets using a buffer to match a configured output rate. Policing drops or marks packets exceeding limits. LLQ and CBWFQ manage queue priorities but do not buffer for rate matching. Traffic management is a critical aspect of Quality of Service (QoS) in modern networks, ensuring that critical applications receive the necessary bandwidth while preventing congestion. Among the various mechanisms, traffic shaping and policing are key tools, each with distinct purposes and behaviors.
Traffic shaping is a mechanism that regulates the flow of traffic by buffering packets when the transmission rate exceeds a configured output rate. This allows the network to smooth out bursts and match the outgoing traffic to a defined bandwidth profile. By delaying excess packets rather than dropping them, shaping ensures that traffic conforms to network policies, reducing the likelihood of congestion and packet loss. It is particularly useful for applications that are sensitive to loss, such as voice and video, because it preserves packets while controlling the flow.
In contrast, policing enforces traffic limits by actively dropping or marking packets that exceed a predefined rate. Policing is simpler and more immediate, but can lead to packet loss if traffic bursts occur. It is often used for enforcing Service Level Agreements (SLAs) or restricting specific traffic flows rather than preserving all packets.
Class-Based Weighted Fair Queuing (CBWFQ) and Low-Latency Queuing (LLQ) are queueing mechanisms designed to manage packet scheduling rather than controlling rates directly. CBWFQ allocates bandwidth to different traffic classes based on configured weights, ensuring fair sharing among flows. LLQ extends CBWFQ by adding strict priority queues for delay-sensitive traffic such as voice, guaranteeing low latency for high-priority traffic. However, unlike shaping, these mechanisms do not buffer excess traffic to conform to a bandwidth rate; they only control the order and bandwidth allocation of queued packets.
In summary, shaping and policing control the volume of traffic differently—shaping smooths bursts, while policing enforces hard limits. CBWFQ and LLQ focus on prioritizing and scheduling traffic. Understanding the differences allows network engineers to design QoS strategies that balance performance, reliability, and fairness across various applications. Proper deployment ensures optimal use of network resources, minimal congestion, and a consistent user experience across critical services.
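The shaping-versus-policing contrast above can be expressed in a short MQC sketch (policy names, rate, and interface are illustrative):

```
policy-map SHAPE-WAN
 class class-default
  shape average 10000000            ! buffer and smooth bursts down to 10 Mbps
!
policy-map POLICE-WAN
 class class-default
  police 10000000 conform-action transmit exceed-action drop
!
interface GigabitEthernet0/1
 service-policy output SHAPE-WAN   ! shaping applied outbound
```

Swapping SHAPE-WAN for POLICE-WAN on the interface changes the behavior from delaying excess traffic to dropping it, which is exactly the trade-off the explanation describes.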
Question 90:
Which MPLS router type assigns labels to incoming packets based on FEC?
A) CE
B) PE
C) P
D) LER
Answer: D) LER
Explanation:
Label Edge Routers (LERs) push labels onto packets entering the MPLS network, assigning them to Forwarding Equivalence Classes (FECs). P routers forward labels, and PE and CE are at the edges. In an MPLS (Multiprotocol Label Switching) network, understanding the roles of different routers is essential for proper traffic forwarding and efficient network operation. The primary components include Label Edge Routers (LERs), Provider Edge (PE) routers, Provider (P) routers, and Customer Edge (CE) routers, each serving a specific function in the MPLS architecture.
Label Edge Routers (LERs) operate at the edge of the MPLS network and play a crucial role in classifying incoming packets into Forwarding Equivalence Classes (FECs). When a packet enters the MPLS cloud, the LER examines its Layer 3 header, determines the appropriate FEC, and assigns an MPLS label to it. This label dictates the packet’s path through the MPLS core, allowing routers in the network to forward packets based on labels rather than IP routing tables. LERs also remove labels from outgoing packets destined for non-MPLS networks, ensuring seamless integration with traditional IP routing.
Provider Edge (PE) routers are sometimes synonymous with LERs and are positioned at the boundary of the service provider’s network. They push and pop labels for traffic entering or leaving the MPLS domain, providing connectivity to CE routers. P routers, located in the core of the MPLS network, do not originate or terminate traffic but forward labeled packets based solely on the MPLS label information. This label-switching process enhances speed and efficiency, as core routers do not need to examine the IP header.
Customer Edge (CE) routers reside at the customer’s side of the network and are typically unaware of MPLS labels. They exchange routing information with PE routers but rely on the service provider to handle label assignment and MPLS forwarding.
Together, these components enable MPLS to provide scalable, high-performance routing with features like traffic engineering, VPNs, and QoS. LERs are especially critical because they define how packets enter the MPLS network and determine the FEC-based paths through the core, ensuring proper forwarding, reduced complexity in the core, and optimized use of network resources.
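Label switching is enabled per link on the LER/P routers; a minimal IOS sketch (interface name is illustrative) is:

```
! On core-facing links of LER/P routers, run LDP so labels are exchanged
interface GigabitEthernet0/0
 mpls ip
!
! Verify label bindings per FEC and the label operations (push/swap/pop)
show mpls forwarding-table
```

On the LER, show mpls forwarding-table shows the label pushed for each FEC; on a P router it shows the incoming-to-outgoing label swap used to transit the packet.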
Question 91:
A router receives multiple BGP paths; the path with the highest LOCAL_PREF is selected. This selection occurs:
A) Within the same AS
B) Between different ASes
C) Based on AS_PATH length
D) Randomly
Answer: A) Within the same AS
Explanation:
LOCAL_PREF is a BGP attribute used inside an AS to determine the preferred outbound path. Higher values are preferred. MED is used between ASes, AS_PATH prevents loops, and BGP selection is deterministic, not random. In BGP (Border Gateway Protocol), path selection relies on multiple attributes that help routers choose the optimal route. One of the most important attributes for influencing outbound traffic within a single Autonomous System (AS) is LOCAL_PREF (Local Preference). LOCAL_PREF is a well-known, discretionary attribute that routers use to determine which exit path should be preferred when multiple routes to the same destination are available. The higher the LOCAL_PREF value, the more preferred the route becomes. Network administrators can configure LOCAL_PREF to influence traffic flows, for example, preferring certain links for outgoing traffic to optimize performance, cost, or policy requirements.
LOCAL_PREF is only used within the same AS and is propagated to all iBGP (internal BGP) peers. It does not affect routing decisions between different ASes. In contrast, MED (Multi-Exit Discriminator) is used between ASes to indicate the preferred entry point into an AS. Lower MED values are preferred, allowing an AS to suggest to its neighbors which of several entry points they should use when sending traffic into it.
Another critical attribute is AS_PATH, which lists all ASes a route has traversed. AS_PATH helps prevent routing loops and is considered after LOCAL_PREF and weight when selecting the best path. Routes with shorter AS_PATH lengths are generally preferred.
It is also important to note that BGP path selection is deterministic; routes are never chosen randomly. The BGP decision process follows a strict sequence: first, weight (Cisco-specific), then LOCAL_PREF, locally originated routes, AS_PATH length, origin type, MED, and so on, ensuring predictable routing behavior. Random selection does not occur unless multiple paths are completely equal in all relevant attributes, in which case load balancing may be applied based on configuration.
By understanding these attributes, network engineers can effectively control routing within an AS, influence traffic between ASes, and maintain stable and predictable interdomain routing in complex network topologies. LOCAL_PREF is key for internal traffic engineering, MED for inter-AS influence, and AS_PATH for loop prevention.
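LOCAL_PREF is usually set with an inbound route-map on an eBGP session so that the preference propagates to all iBGP peers. A minimal IOS sketch (AS numbers, neighbor address, and value are illustrative):

```
route-map SET-LP permit 10
 set local-preference 200           ! higher than the default of 100
!
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 route-map SET-LP in
```

Every router in AS 65001 will then prefer the exit point that learned the route through this neighbor, because 200 beats the default LOCAL_PREF of 100.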
Question 92:
Which SD-WAN policy type prioritizes traffic based on SLA metrics like jitter and packet loss?
A) Data policy
B) Control policy
C) Application-aware routing (AAR) policy
D) QoS trust policy
Answer: C) Application-aware routing (AAR) policy
Explanation:
AAR policy selects paths dynamically based on real-time WAN performance metrics, optimizing critical application delivery. Data policies enforce routing, control policies manage devices, and QoS policies mark traffic. In Cisco SD-WAN, Application-Aware Routing (AAR) policies play a critical role in ensuring optimal application performance across the WAN by dynamically selecting paths based on real-time network performance metrics. These metrics, such as jitter, latency, packet loss, and throughput, are continuously measured by the BFD probes that run across each overlay tunnel. AAR evaluates multiple paths between sites and chooses the path that best meets the application’s requirements, ensuring critical traffic like voice, video, or enterprise applications experiences minimal degradation. This dynamic path selection helps maintain service quality, avoid congestion, and improve user experience.
While AAR policies focus on optimizing traffic paths for applications, data policies govern the handling of traffic flows within the SD-WAN fabric. Data policies allow administrators to define which traffic is allowed, blocked, or redirected, and can include firewall rules, NAT, or specific routing decisions. They provide granular control over how data moves through the network, but do not automatically adjust paths based on performance.
Control policies, on the other hand, manage the behavior of the SD-WAN control plane. They dictate how devices authenticate, join the overlay network, and communicate with vSmart controllers. Control policies ensure the network’s management and routing functions are secure, consistent, and properly orchestrated across all devices, but they do not influence per-application path selection.
Finally, QoS trust policies are applied to classify and mark traffic based on DSCP or CoS values. These policies influence how traffic is treated in terms of priority or bandwidth allocation, but do not dynamically select optimal paths based on WAN performance metrics.
By combining these policies, SD-WAN provides both granular control and intelligent routing. AAR policies complement data, control, and QoS policies by dynamically optimizing traffic flows, ensuring critical applications receive the best possible network performance while maintaining security, compliance, and policy adherence across the WAN. This separation of roles ensures that traffic optimization, policy enforcement, and control-plane management operate efficiently and independently.
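Conceptually, an AAR policy pairs an SLA class (the thresholds) with an app-route policy (the match and action). A rough vSmart-style sketch follows; names, thresholds, and colors are illustrative placeholders, and exact syntax depends on the software release:

```
policy
 sla-class VOICE-SLA
  loss    1                         ! max 1% packet loss
  latency 150                       ! max 150 ms latency
  jitter  30                        ! max 30 ms jitter
 !
 app-route-policy AAR-VOICE
  vpn-list CORP-VPNS
   sequence 10
    match
     dscp 46                        ! EF-marked voice traffic
    !
    action
     sla-class VOICE-SLA preferred-color mpls
```

Traffic matching DSCP 46 is steered onto the MPLS transport while that path meets the SLA, and moved to another compliant path when it does not.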
Question 93:
A network engineer wants to verify which VLANs exist on a switch and their status. Which command should they use?
A) show interfaces trunk
B) show vlan brief
C) show running-config
D) show spanning-tree
Answer: B) show vlan brief
Explanation:
The show vlan brief command lists VLAN IDs, names, status, and assigned ports, helping verify active VLANs. Trunk commands verify propagation, and spanning-tree commands show loop prevention. In a Cisco switched network, managing VLANs effectively is essential for proper segmentation, security, and traffic flow. The command show vlan brief provides a concise overview of all VLANs configured on a switch. It displays the VLAN ID, name, status (active or suspended), and the interfaces assigned to each VLAN. This is particularly useful for network administrators to verify that VLANs are correctly configured and active on the switch. It allows quick identification of missing VLAN assignments or misconfigured ports that could affect connectivity for devices within those VLANs.
While the show vlan brief command provides information about VLAN presence and port assignments, it does not indicate whether trunk links are configured properly or which VLANs are allowed across those trunks. That is where show interfaces trunk becomes essential. This command lists all trunked interfaces, their operational status, the native VLAN, and the VLANs allowed across the trunk. This helps administrators troubleshoot issues where VLANs are not propagating correctly between switches, potentially preventing communication between devices in the same VLAN but on different switches.
The show running-config command displays the full current configuration of the switch, including VLAN definitions, interface settings, trunk configurations, and other protocols. While comprehensive, it requires manual inspection to identify trunking or VLAN-related issues, which is less efficient for quick troubleshooting.
Finally, show spanning-tree provides insight into the spanning-tree protocol (STP) status on the switch. It shows root bridges, port roles, and state, helping prevent loops in Layer 2 networks. Although STP is not directly related to VLAN verification, it is important when troubleshooting network connectivity issues, especially when certain VLANs appear inactive or blocked due to STP decisions.
By using show vlan brief in combination with the trunk, running-config, and spanning-tree commands, administrators can comprehensively manage VLANs, verify proper trunking, and ensure loop-free operation across the network. This ensures reliable communication and optimal network performance.
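Representative output from the command (VLAN names and port assignments below are illustrative, not from the question) looks like this:

```
Switch# show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------
1    default                          active    Gi0/3, Gi0/4
10   USERS                            active    Gi0/1
20   SERVERS                          active    Gi0/2
```

A VLAN missing from this list, or shown without the expected ports, is the usual explanation for hosts in that VLAN failing to communicate locally.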
Question 94:
Which feature allows Cisco TrustSec to enforce policies based on user roles rather than IP addresses?
A) VLANs
B) Security Group Tags (SGTs)
C) ACLs
D) Port-based authentication
Answer: B) Security Group Tags (SGTs)
Explanation:
SGTs allow policy enforcement based on roles or device types, abstracting traffic from IP addresses. VLANs provide Layer 2 segmentation, ACLs filter traffic, and port-based authentication controls access at a single interface. Cisco TrustSec is a modern network security framework that emphasizes policy-based segmentation rather than relying solely on traditional network constructs like IP addresses or VLANs. The core element of TrustSec is the Security Group Tag (SGT), which allows administrators to assign a role or classification to users, devices, or endpoints. These tags are then used by network devices to enforce access control policies dynamically across the network, ensuring that traffic is treated according to its assigned role regardless of the underlying IP addressing or VLAN structure. This abstraction simplifies policy management and improves security in large, dynamic environments.
While VLANs provide basic Layer 2 segmentation by separating traffic into distinct broadcast domains, they are static and require manual configuration on switches and trunks. VLANs cannot dynamically enforce security policies based on user roles or device types, and managing large numbers of VLANs across multiple sites can be complex and error-prone.
Access Control Lists (ACLs), on the other hand, filter traffic based on IP addresses, protocols, and ports. While ACLs can control traffic flows effectively, they do not inherently provide dynamic policy enforcement based on user roles or the context of the device. ACLs must be manually maintained and updated as network requirements change.
Port-based authentication, such as 802.1X, ensures that devices are authenticated before gaining access to the network. However, this mechanism primarily controls initial access at the switch port and does not provide ongoing segmentation or role-based policy enforcement once the device is connected.
By leveraging SGTs, Cisco TrustSec allows for a more flexible and scalable approach to network security. Policies can follow users or devices across the network, and enforcement can occur at Layer 2 or Layer 3 without being tied to IP addresses or VLANs. This makes SGTs a powerful tool for modern enterprise networks where mobility, scalability, and dynamic security policies are essential. Combining SGTs with traditional VLANs, ACLs, and authentication mechanisms provides a layered security model, enhancing both access control and network segmentation.
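As a rough sketch, static SGT classification and enforcement on a Catalyst switch might look like this (the subnet, tag value, and role name are illustrative; in production, tags are usually assigned dynamically by Cisco ISE):

```
! Map an endpoint subnet to SGT 10 (e.g., "Employees") -- values are examples
cts role-based sgt-map 10.1.10.0/24 sgt 10
! Enable SGACL enforcement so role-based policies are applied to traffic
cts role-based enforcement
! Verify the current IP-to-SGT bindings
show cts role-based sgt-map all
```

Once traffic carries an SGT, downstream devices can enforce source/destination group policies without knowing or caring about the endpoint's IP address or VLAN.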
Question 95:
In MPLS, which router forwards packets based only on the label and does not inspect the IP header?
A) CE
B) PE
C) P
D) LER
Answer: C) P
Explanation:
P routers (core routers) forward MPLS packets based on label lookup, without examining the IP header. LER/PE routers handle label push/pop at the network edge, while CE routers are outside the MPLS cloud. In an MPLS (Multiprotocol Label Switching) network, different types of routers have distinct roles to ensure efficient packet forwarding and end-to-end connectivity. P routers, or Provider routers, are core routers within the MPLS network. Their primary function is to forward packets based on MPLS labels rather than performing traditional IP routing lookups. When a labeled packet arrives, a P router examines the label and swaps it with the next-hop label before forwarding it. This label-switching mechanism enables high-speed forwarding and reduces the processing overhead associated with IP routing. P routers do not add or remove labels; they purely operate within the MPLS core to transit traffic between edge devices.
PE routers, also referred to as Label Edge Routers (LERs) in some contexts, reside at the edge of the provider network. They are responsible for pushing MPLS labels onto incoming packets from Customer Edge (CE) routers and popping labels off packets destined for the CE network. PE routers act as the boundary between the MPLS core and the customer network, handling tasks such as label assignment, VPN encapsulation, and route advertisement into the MPLS network. LER and PE are often used interchangeably because they perform similar edge functions.
CE routers are devices located on the customer side of the MPLS cloud. They connect to the provider network but do not participate in MPLS label forwarding. Instead, CE routers typically perform standard IP routing and rely on the PE/LER routers to handle MPLS encapsulation and forwarding.
Understanding these roles is essential for designing and troubleshooting MPLS networks. P routers ensure high-speed core transit, PE/LER routers manage edge functions and label operations, and CE routers interface with customer networks. This separation of duties allows MPLS to provide scalable, efficient, and flexible Layer 3 VPN services while maintaining high performance across the provider network. Each router type plays a crucial role in the end-to-end delivery of MPLS services.
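A minimal label-switching setup on a P router's core interfaces can be sketched as follows (the interface names and addressing are illustrative, and LDP is assumed as the label distribution protocol):

```
mpls label protocol ldp            ! use LDP for label distribution
!
interface GigabitEthernet0/0
 ip address 10.0.0.1 255.255.255.252
 mpls ip                           ! enable MPLS label switching on this link
!
! Verify label-swap behavior on the P router:
! shows local/outgoing labels per prefix -- no IP lookup in the forwarding path
show mpls forwarding-table
```

Note that the P router's configuration contains no customer routes or VRFs; those exist only on the PE/LER devices at the edge.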
Question 96:
Which command verifies all BGP routes received from a specific neighbor?
A) show ip bgp
B) show ip bgp summary
C) show ip bgp neighbors <neighbor> routes
D) show ip route bgp
Answer: C) show ip bgp neighbors <neighbor> routes
Explanation:
This command displays all routes received from a BGP neighbor, including attributes like next-hop, AS_PATH, and MED. The summary shows neighbor status, show ip bgp shows all routes, and show ip route bgp shows installed BGP routes. In BGP (Border Gateway Protocol), monitoring and troubleshooting routing information is critical for maintaining network stability and ensuring proper path selection. The command show ip bgp neighbors <neighbor> routes provides detailed information about all routes received specifically from a particular BGP neighbor. It displays each route’s next-hop, AS_PATH, origin type, MED (Multi-Exit Discriminator), and other BGP attributes. This neighbor-specific view is essential for network engineers to understand exactly what paths a particular peer is advertising, verify policy application, and troubleshoot routing inconsistencies or connectivity issues with that neighbor.
Other related BGP commands serve complementary purposes but do not provide the same level of neighbor-specific detail. The show ip bgp summary command gives a concise overview of all BGP neighbors and their states, such as Idle, Active, or Established, along with prefixes received and uptime. It is useful for quickly assessing the overall health of BGP sessions, but it does not show route details.
The show ip bgp command displays the entire BGP table on the router, listing all routes learned from all neighbors and their associated attributes. While this provides a global perspective, it can be overwhelming in large networks and does not allow filtering by a specific neighbor.
The show ip route bgp command shows only the BGP-learned routes that are currently installed in the IP routing table. This view indicates which BGP paths the router is actively using to forward traffic, but it does not display all routes received from peers, particularly those not selected as the best path.
By using show ip bgp neighbors <neighbor> routes, network administrators gain granular insight into a neighbor’s advertised routes, enabling precise troubleshooting, validation of routing policies, and optimization of BGP path selection. This neighbor-focused command is indispensable for maintaining predictable and reliable BGP behavior in complex networks.
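For example (the neighbor address below is illustrative):

```
! All routes received from one peer, with next-hop, AS_PATH, MED, etc.
Router# show ip bgp neighbors 192.0.2.1 routes
!
! Related views for comparison:
Router# show ip bgp summary    ! session state and prefix counts per peer
Router# show ip bgp            ! the full BGP table from all peers
Router# show ip route bgp      ! only BGP routes installed in the routing table
```

A useful workflow is to start with show ip bgp summary to confirm the session is Established, then use the neighbor-specific routes view to inspect exactly what that peer is advertising.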
Question 97:
In OSPF, which LSA type summarizes routes from one area to another?
A) Type 1
B) Type 2
C) Type 3
D) Type 5
Answer: C) Type 3
Explanation:
Type 3 LSAs are summary LSAs generated by ABRs to advertise intra-area routes to other areas. Type 1/2 LSAs describe routers and networks within an area, and Type 5 LSAs describe external routes. In OSPF (Open Shortest Path First), Link-State Advertisements (LSAs) are the building blocks of the routing protocol, enabling routers to exchange network topology information. Each LSA type has a specific role in propagating routing information across OSPF areas and between autonomous systems. Type 3 LSAs, known as Summary LSAs, are generated by Area Border Routers (ABRs) to advertise routes from one area to another. These LSAs summarize networks from within an area and share them with other areas, which helps reduce the size of the link-state database in non-originating areas and improves overall scalability. Type 3 LSAs ensure that routers in different areas know how to reach destinations outside their local area without flooding the network with detailed internal topology.
Type 1 LSAs describe router LSAs, containing information about the router’s interfaces, states, and directly connected networks within an area. These LSAs are flooded only within the originating area. Type 2 LSAs describe network LSAs, generated by the designated router (DR) for multi-access networks, and provide details about all routers connected to the network segment. Both Type 1 and Type 2 LSAs are confined to their respective areas and are essential for constructing the area’s link-state database.
Type 5 LSAs, on the other hand, are external LSAs created by Autonomous System Boundary Routers (ASBRs) to advertise routes learned from external sources, such as other routing protocols, into the OSPF domain. Type 5 LSAs are flooded across all areas except stub areas, providing reachability to destinations outside the OSPF autonomous system.
By understanding these LSA types, network engineers can properly design OSPF hierarchies. Type 3 LSAs specifically optimize inter-area routing by summarizing internal routes, reducing the amount of routing information exchanged, improving convergence, and maintaining efficiency across large-scale OSPF networks. This differentiation ensures that OSPF remains scalable and manageable in enterprise environments with multiple areas.
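On an ABR, the inter-area summarization carried in Type 3 LSAs is configured with the area range command (the process ID, area number, and prefix below are illustrative):

```
router ospf 1
 ! Advertise one summary Type 3 LSA for all of area 1's 10.1.0.0/16
 ! networks, instead of individual per-prefix summaries
 area 1 range 10.1.0.0 255.255.0.0
```

Without the range statement the ABR still generates Type 3 LSAs, but one per intra-area prefix; the range command collapses them into a single summary, shrinking the link-state databases in the other areas.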
Question 98:
A network engineer wants to ensure a router drops excess traffic exceeding a configured rate. Which mechanism should they use?
A) Shaping
B) Policing
C) LLQ
D) CBWFQ
Answer: B) Policing
Explanation:
Policing enforces traffic limits by dropping or marking packets exceeding a configured rate. Shaping buffers excess packets, LLQ prioritizes traffic, and CBWFQ allocates bandwidth per class. In Cisco networks, Quality of Service (QoS) mechanisms are essential for managing traffic efficiently, especially in environments with limited bandwidth or latency-sensitive applications like voice and video. Policing is a QoS technique that enforces a maximum traffic rate on a link or interface. When traffic exceeds the configured rate, excess packets are either dropped or remarked, making policing a strict mechanism for controlling bandwidth usage. Policing is often used to enforce Service Level Agreements (SLAs) or to prevent a single user or application from consuming excessive bandwidth.
Shaping, in contrast, is a traffic-smoothing mechanism that buffers excess packets instead of dropping them. By queuing packets and sending them out at a configured rate, shaping smooths bursts in traffic, ensuring steady output while conforming to bandwidth limitations. This makes shaping well suited to loss-sensitive traffic such as TCP-based data transfers; however, the queuing delay it introduces makes it a poor fit for latency-sensitive traffic like VoIP, which is better served by prioritization.
LLQ (Low-Latency Queuing) is designed for prioritizing time-sensitive traffic, such as voice. LLQ creates a strict-priority queue for high-priority traffic while simultaneously supporting class-based weighted fair queuing for other traffic classes. This ensures minimal delay for critical applications without starving lower-priority traffic.
CBWFQ (Class-Based Weighted Fair Queuing) allows bandwidth to be allocated per traffic class. Unlike LLQ, CBWFQ does not create strict-priority queues; instead, it ensures fair bandwidth distribution among classes while honoring configured weights.
In summary, policing enforces hard traffic limits, shaping buffers excess traffic to smooth bursts, LLQ gives strict priority to critical flows, and CBWFQ allocates bandwidth among multiple classes. Choosing the right mechanism depends on whether the goal is strict enforcement, traffic smoothing, prioritization, or fair allocation of resources.
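A brief sketch contrasting policing and shaping on IOS (the rates, class name, and interface are illustrative):

```
class-map match-any BULK
 match ip dscp default
!
policy-map POLICE-BULK
 class BULK
  ! Hard-limit to 1 Mbps: conforming traffic is sent, excess is dropped
  police cir 1000000 conform-action transmit exceed-action drop
!
policy-map SHAPE-BULK
 class BULK
  ! Same 1 Mbps target, but excess is buffered and smoothed, not dropped
  shape average 1000000
!
interface GigabitEthernet0/1
 service-policy output POLICE-BULK
```

Since the question asks for traffic to be dropped when it exceeds the configured rate, the policing policy (POLICE-BULK) is the one that satisfies the requirement.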
Question 99:
Which wireless protocol prevents loops in access point redundancy scenarios?
A) STP
B) RSTP
C) PVST+
D) None
Answer: D) None
Explanation:
Wireless APs manage redundancy internally through the controller for client roaming and failover. STP/RSTP/PVST+ are Layer 2 protocols for wired networks and are not used in wireless AP redundancy. In wireless networks, redundancy and client failover are handled internally by Access Points (APs) in coordination with the wireless controller. This ensures seamless roaming, load balancing, and high availability without relying on traditional Layer 2 protocols. STP (Spanning Tree Protocol), RSTP (Rapid Spanning Tree Protocol), and PVST+ (Per-VLAN Spanning Tree Plus) are designed for wired Ethernet networks to prevent loops and manage redundant links, but they do not apply to wireless AP redundancy. Wireless controllers and APs manage failover dynamically, maintaining session continuity for clients while optimizing network performance and reliability.
Question 100:
Which SD-WAN component provides a GUI-based management platform for configuration, monitoring, and policy deployment?
A) vSmart
B) vBond
C) vManage
D) vEdge
Answer: C) vManage
Explanation:
vManage provides a centralized GUI for configuration, template deployment, monitoring, and troubleshooting. vSmart handles control plane routing, vBond handles onboarding, and vEdge is the data-plane device. In Cisco SD-WAN, vManage serves as the centralized management platform, providing a GUI for configuration, template deployment, monitoring, and troubleshooting of the entire overlay network. It allows administrators to define policies, push configurations to devices, and monitor network health from a single interface. vSmart controllers handle the control plane, distributing routing and security policies to vEdge devices. vBond orchestrates zero-touch provisioning and initial authentication of devices joining the overlay, while vEdge routers operate at the data plane, forwarding user traffic across secure IPsec tunnels. Together, these components enable a scalable, secure SD-WAN deployment.