Question 1:
Which OSPF network type reduces LSA flooding in multi-access networks by electing a designated router (DR) and backup designated router (BDR)?
A) Point-to-Point
B) Broadcast
C) Non-Broadcast Multi-Access (NBMA)
D) Point-to-Multipoint
Answer: B) Broadcast
Explanation:
In multi-access networks like Ethernet, if every router sent LSAs (Link-State Advertisements) to all other routers, it would create excessive traffic. OSPF reduces this by electing a Designated Router (DR), which is responsible for sending LSAs to all routers in the segment, while a Backup Designated Router (BDR) is ready to take over if the DR fails. The DR election is based on OSPF interface priority, with the highest priority preferred; ties are broken by the highest router ID. This mechanism prevents unnecessary LSA flooding and improves network stability and scalability.
In OSPF, network types significantly influence how routers exchange LSAs and establish adjacencies. Point-to-Point networks, such as serial links, connect only two routers, so OSPF does not require a Designated Router (DR); LSAs are exchanged directly between the two devices. Broadcast networks, like Ethernet, can have multiple routers on the same segment. Without a DR, each router would send LSAs to all others, causing excessive traffic and unnecessary CPU load. To prevent this, OSPF elects a Designated Router (DR) to act as the central point for LSA distribution, and a Backup Designated Router (BDR) is elected to take over if the DR fails, ensuring stability and continuity. Non-Broadcast Multi-Access (NBMA) networks, such as Frame Relay, also support multiple routers but do not support automatic broadcast. OSPF still elects a DR/BDR to reduce LSA flooding, but neighbors must often be manually configured. Point-to-Multipoint networks resemble multiple point-to-point links combined into a single logical interface; OSPF may treat them as multiple point-to-point connections or elect a DR depending on configuration. By using DR/BDR in multi-access networks, OSPF minimizes unnecessary LSA flooding, reduces bandwidth consumption, and ensures scalable, efficient routing.
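On Cisco IOS, the DR election can be influenced per interface. A minimal sketch (the interface name and priority value are illustrative):

```
interface GigabitEthernet0/0
 ip ospf priority 100        ! highest priority wins the DR election; 0 = never DR/BDR
 ip ospf network broadcast   ! default on Ethernet; triggers DR/BDR election
```

The command show ip ospf interface GigabitEthernet0/0 confirms the network type in use and which routers were elected DR and BDR.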
Question 2:
Which Cisco-proprietary protocol dynamically propagates VLAN configuration information across switches?
A) CDP
B) STP
C) VTP
D) LLDP
Answer: C) VTP
Explanation:
The VLAN Trunking Protocol (VTP) enables Cisco switches to share VLAN configuration information dynamically. VTP helps maintain consistency of VLANs across a switched network. VTP has modes such as Server, Client, and Transparent. In Server mode, switches can create, modify, and delete VLANs. In Client mode, switches receive VLAN information but cannot make changes. Transparent mode allows switches to forward VTP information without updating their VLAN database. This reduces administrative overhead.
In a Cisco switched network, several protocols work together to maintain topology, share information, and ensure efficient operation. CDP (Cisco Discovery Protocol) is a Layer 2 protocol that allows Cisco devices to discover directly connected neighbors, sharing device IDs, IP addresses, and capabilities. This is useful for network mapping and troubleshooting. STP (Spanning Tree Protocol) prevents loops in a Layer 2 network by electing a root bridge and placing redundant paths in a blocking state. STP ensures that broadcast storms do not occur, maintaining network stability. VTP (VLAN Trunking Protocol) is a Cisco-proprietary protocol that dynamically propagates VLAN configuration information across switches. VTP helps maintain VLAN consistency, reducing administrative overhead and configuration errors. Switches in Server mode can create, modify, or delete VLANs; Client mode switches receive updates but cannot make changes, and Transparent mode switches forward VTP information without affecting their VLAN database. Finally, LLDP (Link Layer Discovery Protocol) is an open-standard protocol similar to CDP, allowing devices from different vendors to advertise their identity and capabilities to neighbors. Together, these protocols improve network efficiency: CDP and LLDP aid in discovery, STP prevents loops, and VTP ensures consistent VLAN configurations across the network, simplifying management and enhancing scalability.
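A typical VTP setup on the server switch might look like the following (the domain name, password, and VLAN are placeholders):

```
vtp domain CORP
vtp mode server          ! clients use "vtp mode client"; "vtp mode transparent" forwards only
vtp password Cisco123
vlan 10
 name USERS              ! created on the server and propagated to clients in the same domain
```

The command show vtp status verifies the mode, domain name, and configuration revision number on each switch.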
Question 3:
Which routing protocol uses a composite metric of bandwidth and delay to select the best path by default?
A) OSPF
B) EIGRP
C) RIP
D) BGP
Answer: B) EIGRP
Explanation:
Enhanced Interior Gateway Routing Protocol (EIGRP) uses a composite metric based on bandwidth, delay, load, and reliability. By default, only bandwidth and delay are considered for route selection. This allows EIGRP to choose the most efficient path for data, unlike RIP, which relies solely on hop count. EIGRP is also a hybrid protocol, combining features of both distance-vector and link-state protocols, allowing fast convergence and loop-free routing.
Enhanced Interior Gateway Routing Protocol (EIGRP) is an advanced, Cisco-proprietary routing protocol that uses a composite metric to determine the best path to a destination. This metric considers bandwidth, delay, load, and reliability, but by default, only bandwidth and delay influence route selection. This allows EIGRP to choose more efficient paths than simpler protocols like RIP, which relies solely on hop count and cannot consider link quality. EIGRP is often referred to as a hybrid protocol because it combines the advantages of distance-vector protocols, such as neighbor discovery and periodic updates, with certain link-state features, including partial updates and knowledge of the network topology. This hybrid nature enables fast convergence, meaning the network can quickly adapt to changes, such as link failures, while maintaining loop-free routing using the Diffusing Update Algorithm (DUAL). In contrast, OSPF, a link-state protocol, calculates the shortest path based on cost derived from bandwidth but operates differently, relying on a complete link-state database. BGP, an exterior gateway protocol, focuses on policy-based routing between autonomous systems. By using a composite metric, EIGRP provides a balance between efficiency, speed, and stability, making it suitable for complex enterprise networks where fast, reliable routing is critical.
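The default behavior corresponds to K-values K1=K3=1 and K2=K4=K5=0, which can be confirmed or explicitly set under the EIGRP process (the AS number and network are illustrative):

```
router eigrp 100
 network 10.0.0.0
 metric weights 0 1 0 1 0 0   ! TOS K1 K2 K3 K4 K5: only bandwidth (K1) and delay (K3) count
```

The command show ip protocols displays the K-values currently in use; note that mismatched K-values prevent EIGRP neighbors from forming.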
Question 4:
Which protocol is responsible for translating IP addresses to MAC addresses on a LAN?
A) ARP
B) DNS
C) DHCP
D) ICMP
Answer: A) ARP
Explanation:
The Address Resolution Protocol (ARP) resolves IPv4 addresses to corresponding MAC addresses on a local network. When a host needs to communicate within a LAN, it sends an ARP request to discover the MAC address of the destination IP. The device with the matching IP responds with its MAC, enabling communication at the Data Link Layer. ARP is critical in LAN environments and is transparent to higher-layer protocols.
The Address Resolution Protocol (ARP) is a fundamental protocol in IPv4 networks that maps IP addresses to their corresponding MAC addresses at the Data Link Layer. When a host wants to send data to another device on the same local area network (LAN), it must know the destination’s MAC address. The host sends an ARP request, which is a broadcast query asking, “Who has this IP address?” The device with the matching IP responds with its MAC address, allowing the sender to forward frames at Layer 2. This process is transparent to higher-layer protocols, meaning TCP, UDP, or application protocols do not need to manage MAC addresses directly.
Other protocols like DNS (Domain Name System) translate human-readable domain names into IP addresses, DHCP (Dynamic Host Configuration Protocol) automatically assigns IP addresses and network configuration to devices, and ICMP (Internet Control Message Protocol) provides error messages and diagnostic functions, such as ping. While these protocols operate at different layers or for different purposes, ARP is essential for LAN communication, ensuring devices can physically address frames correctly. Without ARP, IP packets could not be delivered on Ethernet networks, making it a critical link between the Network and Data Link layers in IPv4 environments.
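On a Cisco router, the ARP cache can be inspected from exec mode and, where needed, populated statically in global configuration (the IP and MAC addresses are examples):

```
! exec mode: view learned IP-to-MAC mappings
show ip arp
! global configuration: add a static ARP entry
arp 192.168.1.20 0011.2233.4455 arpa
```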
Question 5:
In a Cisco SD-WAN deployment, which component provides centralized management and configuration of devices?
A) vSmart Controller
B) vBond Orchestrator
C) vEdge Router
D) vManage NMS
Answer: D) vManage NMS
Explanation:
In Cisco SD-WAN, vManage Network Management System (NMS) is the central GUI-based management platform. It is responsible for device provisioning, policy creation, template deployment, monitoring, and troubleshooting. vSmart controllers handle control plane routing, and vBond facilitates secure initial device onboarding. vEdge routers act as data plane devices to forward traffic. The vManage NMS simplifies large-scale WAN management by centralizing configuration and monitoring.
In Cisco SD-WAN, the architecture is composed of several key components, each serving a distinct role to ensure efficient, secure, and manageable wide-area network operations. The vManage Network Management System (NMS) is the central GUI-based management platform. It provides administrators with a single pane of glass for device provisioning, policy creation, template deployment, monitoring, and troubleshooting, making large-scale WAN management significantly easier. The vSmart controllers handle the control plane, distributing routing information, security policies, and encryption keys to vEdge routers. This ensures that the data plane operates securely and efficiently, with consistent policy enforcement across all WAN edges. The vBond orchestrator is critical during initial device onboarding, providing zero-touch provisioning (ZTP) and authenticating new devices before they join the network. It also facilitates secure communication between vEdge routers and vSmart controllers. The vEdge routers are the data plane devices, responsible for forwarding actual traffic across the WAN while enforcing policies received from vSmart controllers. By separating management, control, and data planes, Cisco SD-WAN provides scalability, security, and centralized management. The vManage NMS, in particular, simplifies complex WAN operations by allowing administrators to configure multiple sites, deploy templates, and monitor network health from a centralized dashboard.
Question 6:
Which mechanism does Cisco TrustSec use to enforce security policies based on user roles?
A) 802.1X
B) MACsec
C) SGT (Security Group Tag)
D) TACACS+
Answer: C) SGT (Security Group Tag)
Explanation:
Cisco TrustSec implements Security Group Tags (SGTs) to classify traffic based on user roles or device types. These tags are applied at Layer 2 or Layer 3 and enforce security policies dynamically through network devices. SGTs allow policy-based access control without relying solely on IP addresses, improving flexibility and security. 802.1X handles authentication, while TACACS+ focuses on AAA management.
Cisco TrustSec is a security architecture that provides dynamic, role-based access control across a network using Security Group Tags (SGTs). SGTs classify traffic based on user roles or device types rather than relying solely on IP addresses, allowing network devices to enforce policy-based security dynamically. These tags can be applied at Layer 2 or Layer 3, enabling consistent enforcement across wired and wireless networks. By using SGTs, administrators can define flexible access policies, segment users or devices logically, and ensure sensitive resources are protected without the need for static VLANs or complex ACL configurations.
Other security protocols complement SGTs in network protection. 802.1X provides port-based network access control and is commonly used to authenticate users or devices before granting network access. MACsec (Media Access Control Security) secures traffic at Layer 2 by encrypting Ethernet frames, preventing eavesdropping or tampering on physical links. TACACS+ focuses on centralized AAA (Authentication, Authorization, and Accounting) management, controlling administrative access to network devices rather than directly classifying or securing user traffic. Together, these protocols enhance overall network security, with SGTs enabling dynamic role-based policy enforcement, 802.1X providing authentication, MACsec ensuring link-level encryption, and TACACS+ managing administrative access, creating a robust, multi-layered security framework for modern enterprise networks.
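On supporting platforms, SGTs can be mapped and enforced with commands along these lines (the subnet and tag value are illustrative):

```
cts role-based sgt-map 10.1.10.0/24 sgt 10   ! classify a subnet into security group 10
cts role-based enforcement                   ! enable SGT-based policy enforcement
```

The command show cts role-based sgt-map all lists the active IP-to-SGT mappings.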
Question 7:
Which BGP attribute is primarily used to influence path selection in an autonomous system?
A) AS_PATH
B) LOCAL_PREF
C) NEXT_HOP
D) MED
Answer: B) LOCAL_PREF
Explanation:
LOCAL_PREF (Local Preference) is a BGP attribute used within an autonomous system (AS) to prefer one path over another. Higher LOCAL_PREF values are preferred, which helps network engineers control outbound traffic. AS_PATH, by contrast, is used to prevent loops and influence path selection between ASes, while MED signals to a neighboring AS which of several entry points into the advertising AS is preferred.
In BGP (Border Gateway Protocol), several attributes influence path selection, allowing network engineers to control routing behavior both within and between autonomous systems (ASes). One important BGP attribute is LOCAL_PREF (Local Preference), which is used within an AS to indicate the preferred path for outbound traffic. Higher LOCAL_PREF values are preferred, allowing administrators to prioritize certain links for sending traffic out of the AS. This is particularly useful in multi-homed networks where traffic engineering is required to optimize bandwidth usage, reduce latency, or enforce policy.
Other BGP attributes serve different purposes. AS_PATH records the list of autonomous systems a route has traversed, helping prevent routing loops and influencing path selection between ASes; shorter AS_PATHs are generally preferred. NEXT_HOP specifies the IP address of the next router that should be used to reach a destination, ensuring proper forwarding of packets. MED (Multi-Exit Discriminator) is used to influence path selection between autonomous systems, indicating to external neighbors which path is preferred when multiple links advertise the same prefix; lower MED values are preferred.
By understanding these attributes, network engineers can effectively control both intra-AS traffic flows with LOCAL_PREF and inter-AS routing decisions using AS_PATH, NEXT_HOP, and MED, providing granular traffic engineering capabilities in complex BGP deployments.
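A common pattern is to raise LOCAL_PREF on routes learned from a preferred provider using a route map (the AS numbers and neighbor address are placeholders):

```
route-map PREFER-ISP1 permit 10
 set local-preference 200        ! higher than the default of 100, so this exit is preferred
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 route-map PREFER-ISP1 in
```

Because LOCAL_PREF is propagated to all iBGP peers, every router in the AS learns to prefer this exit for matching prefixes.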
Question 8:
In a QoS deployment, which congestion avoidance mechanism selectively drops packets before queues fill, preventing tail drop?
A) WRED
B) CBWFQ
C) LLQ
D) Policing
Answer: A) WRED
Explanation:
Weighted Random Early Detection (WRED) is a congestion avoidance mechanism that preemptively drops packets with a probability based on queue depth and packet precedence. By dropping lower-priority packets early, WRED prevents tail-drop situations and reduces average queuing latency. Class-Based Weighted Fair Queuing (CBWFQ) and Low-Latency Queuing (LLQ) focus on bandwidth allocation and priority scheduling, while policing simply drops or marks packets exceeding a configured rate.
Weighted Random Early Detection (WRED) is an advanced congestion avoidance mechanism used in network devices to prevent queue overflows and maintain optimal performance. Unlike traditional tail drop, where packets are discarded only when a queue is full, WRED preemptively drops packets based on probability, taking into account the queue size and packet precedence. This approach preferentially drops lower-priority traffic first, helping avoid congestion and minimizing the likelihood of TCP global synchronization, which can degrade network performance. By selectively dropping packets early, WRED reduces latency, packet loss, and jitter, particularly in high-traffic environments.
Other QoS mechanisms serve complementary purposes. Class-Based Weighted Fair Queuing (CBWFQ) divides traffic into classes and allocates bandwidth fairly among them, ensuring each class receives a minimum guaranteed rate. Low-Latency Queuing (LLQ) extends CBWFQ by adding a strict-priority queue for delay-sensitive traffic, such as voice or video, to reduce latency. Policing, in contrast, enforces traffic rate limits by dropping or remarking packets that exceed a defined threshold, without buffering or shaping excess traffic.
Together, these mechanisms allow network engineers to prioritize critical applications, manage congestion, and maintain predictable network performance. WRED, in particular, is effective in smoothing traffic flows and preventing abrupt queue drops, complementing CBWFQ, LLQ, and policing in a comprehensive QoS strategy.
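In Cisco IOS, WRED is typically enabled per class inside a policy map alongside a CBWFQ bandwidth guarantee (the class name, DSCP values, and percentages are illustrative):

```
class-map match-any BULK-DATA
 match dscp af11 af12
policy-map WAN-OUT
 class BULK-DATA
  bandwidth percent 30          ! CBWFQ bandwidth guarantee for this class
  random-detect dscp-based      ! WRED: drop probability weighted by DSCP
interface GigabitEthernet0/1
 service-policy output WAN-OUT
```

The command show policy-map interface GigabitEthernet0/1 reports per-class WRED drop counters.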
Question 9:
Which command verifies the OSPF neighbor adjacency status on a Cisco router?
A) show ip ospf neighbor
B) show ip route ospf
C) show ip protocols
D) show ip ospf database
Answer: A) show ip ospf neighbor
Explanation:
The show ip ospf neighbor command displays the state of OSPF neighbor relationships, including the neighbor router ID, state (Full, 2-Way, and so on), and priority. Understanding adjacency formation is critical for OSPF troubleshooting. The show ip route ospf command displays routes, not adjacencies; show ip protocols shows routing protocol parameters; and show ip ospf database shows LSAs.
In OSPF (Open Shortest Path First) networks, monitoring and troubleshooting neighbor relationships is critical for maintaining proper routing. The command show ip ospf neighbor provides detailed information about OSPF neighbors, including the neighbor router ID, adjacency state (such as Full, 2-way, Init), interface, and priority. This information is essential for verifying that routers have successfully formed adjacencies, which is a prerequisite for exchanging Link-State Advertisements (LSAs) and building accurate routing tables.
Other OSPF-related commands serve complementary purposes. The show ip route ospf command displays only the routes learned via OSPF that are installed in the routing table, but it does not provide information about neighbor states or adjacency formation. The show ip protocols command provides general routing protocol parameters, such as timers, networks advertised, and neighbors, but it does not show detailed adjacency states. The show ip ospf database command displays the OSPF LSA database, which contains information about the network topology, but does not directly indicate whether neighbor relationships are fully formed or functioning correctly.
By using the show ip ospf neighbor command, network engineers can quickly identify issues such as mismatched hello/dead timers, authentication failures, or incorrect network types. This command is a key tool in OSPF troubleshooting, enabling engineers to ensure proper adjacency formation and maintain reliable, loop-free routing in multi-router environments.
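Typical output from the command looks like the following (router IDs and addresses are illustrative):

```
R1# show ip ospf neighbor

Neighbor ID     Pri   State           Dead Time   Address         Interface
2.2.2.2           1   FULL/DR         00:00:35    10.0.0.2        GigabitEthernet0/0
3.3.3.3           1   2WAY/DROTHER    00:00:38    10.0.0.3        GigabitEthernet0/0
```

A stable FULL state with the DR and BDR (and 2WAY with other DROTHER routers on broadcast segments) indicates healthy adjacencies.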
Question 10:
Which type of NAT translates multiple private IP addresses to a single public IP using different port numbers?
A) Static NAT
B) Dynamic NAT
C) PAT
D) NAT64
Answer: C) PAT
Explanation:
Port Address Translation (PAT), also known as NAT overload, maps multiple private IP addresses to a single public IP by using different source ports. Static NAT maps a single private IP to a single public IP, while dynamic NAT maps private IPs to a pool of public IPs. NAT64 enables IPv6 communication with IPv4 networks.
Port Address Translation (PAT), also referred to as NAT overload, is a type of Network Address Translation that allows multiple private IP addresses within a local network to share a single public IP address for Internet access. PAT achieves this by assigning a unique source port to each outbound connection, enabling the router or firewall to differentiate between sessions. This method conserves public IP addresses, which is critical in networks with many hosts but a limited number of public IPs.
Other NAT types serve different purposes. Static NAT creates a one-to-one mapping between a private IP and a public IP, ensuring consistent address translation for services like web servers that need a fixed public IP. Dynamic NAT maps private IPs to a pool of available public IPs, but it requires enough public addresses to accommodate all users; once the pool is exhausted, additional hosts cannot access external networks. NAT64 is designed to enable communication between IPv6 and IPv4 networks, translating IPv6 addresses to IPv4 and vice versa, which is essential for transitioning to IPv6 environments.
PAT is widely used in enterprise and home networks due to its efficiency and scalability. By using port-based mapping, PAT allows numerous devices to access external networks simultaneously while maintaining proper session identification, making it a cornerstone of IPv4 address conservation and NAT deployment strategies.
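A classic IOS PAT configuration overloads the outside interface (the interface names and inside subnet are examples):

```
access-list 1 permit 192.168.1.0 0.0.0.255        ! inside hosts eligible for translation
ip nat inside source list 1 interface GigabitEthernet0/1 overload
interface GigabitEthernet0/0
 ip nat inside
interface GigabitEthernet0/1
 ip nat outside
```

The command show ip nat translations displays the per-session source-port mappings that distinguish each host's traffic.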
Question 11:
Which mechanism does Cisco DNA Center use for policy enforcement at the access layer?
A) VXLAN
B) SDA Fabric
C) OSPF
D) EIGRP
Answer: B) SDA Fabric
Explanation:
Cisco Software-Defined Access (SDA) Fabric allows Cisco DNA Center to enforce policies dynamically at the access layer. It uses virtual networks (VNs), Scalable Group Tags (SGTs), and a LISP-based overlay to segment traffic, reduce manual configuration, and improve security. VXLAN is the underlying encapsulation method; OSPF and EIGRP are routing protocols, not policy enforcement tools.
Cisco Software-Defined Access (SDA) Fabric is a modern network architecture that enables centralized policy enforcement, segmentation, and automation across enterprise networks. SDA Fabric integrates with Cisco DNA Center to dynamically apply network policies at the access layer, reducing the need for manual configuration on individual switches. It uses virtual networks (VNs) to logically separate traffic for different departments or applications, and Scalable Group Tags (SGTs) to enforce role-based access policies. Traffic segmentation and security policies are applied consistently across the network, regardless of physical location, improving both security and operational efficiency.
The fabric relies on VXLAN (Virtual Extensible LAN) as the underlying encapsulation protocol to create overlay networks that span multiple access and distribution switches. VXLAN allows for flexible, scalable, and consistent traffic forwarding within the overlay, enabling SDA to support large-scale enterprise environments without requiring traditional VLAN configurations.
In contrast, traditional routing protocols like OSPF and EIGRP are used to exchange reachability information and compute optimal paths in the network. While essential for connectivity, they do not enforce security or policy. SDA Fabric combines overlay networking, SGTs, and DNA Center automation to provide centralized policy enforcement, simplified operations, and enhanced segmentation, making it a key tool for modern enterprise networks transitioning to software-defined architectures.
Question 12:
Which type of ACL can filter traffic based on both source and destination IP addresses and ports?
A) Standard ACL
B) Extended ACL
C) Named ACL
D) Reflexive ACL
Answer: B) Extended ACL
Explanation:
Extended ACLs allow filtering based on source/destination IP addresses, protocol, and port numbers, providing fine-grained control. Standard ACLs filter only by source IP. Named ACLs are identifiers for standard or extended ACLs, and reflexive ACLs provide temporary dynamic filtering for sessions.
Access Control Lists (ACLs) are essential tools for controlling network traffic by defining rules that permit or deny packets based on specific criteria. Extended ACLs provide fine-grained control over network traffic by filtering packets based on multiple parameters, including source and destination IP addresses, protocol types (TCP, UDP, ICMP), and port numbers. This allows network administrators to implement detailed security policies, such as permitting web traffic while denying file-sharing services from specific hosts or networks.
In contrast, Standard ACLs are simpler and filter traffic only based on the source IP address, limiting their flexibility for detailed traffic control. Named ACLs provide a human-readable identifier for both standard and extended ACLs, simplifying configuration, management, and troubleshooting by replacing numeric ACL IDs with descriptive names. Reflexive ACLs are dynamic and temporary, automatically creating session-based entries to allow return traffic for outbound connections while denying unsolicited inbound traffic.
By using extended ACLs, organizations can enforce specific security policies, control access to sensitive applications, and reduce exposure to potential threats. While standard ACLs provide basic filtering, extended ACLs combined with named and reflexive ACL features offer a comprehensive, scalable, and manageable approach to network security, allowing precise control over traffic flows in complex enterprise environments.
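A named extended ACL illustrating source/destination and port filtering (all addresses and names are examples):

```
ip access-list extended WEB-ONLY
 permit tcp 192.168.10.0 0.0.0.255 host 203.0.113.10 eq 443   ! allow HTTPS to one server
 deny   ip any any log                                        ! implicit deny made explicit, with logging
interface GigabitEthernet0/0
 ip access-group WEB-ONLY in
```

By contrast, a standard ACL could only match the 192.168.10.0/24 source; it could not distinguish the destination host or TCP port 443.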
Question 13:
In an OSPF network, which LSA type describes external routes redistributed into OSPF?
A) Type 1
B) Type 2
C) Type 3
D) Type 5
Answer: D) Type 5
Explanation:
OSPF Type 5 LSAs are used to advertise routes from other routing protocols (external routes) into the OSPF domain. Type 1 and Type 2 describe router and network LSAs within an area. Type 3 is a summary LSA used by Area Border Routers to advertise networks to other areas. Type 5 LSAs are flooded to all areas except stub areas.
In OSPF (Open Shortest Path First), different Link-State Advertisement (LSA) types are used to describe network topology and share routing information. Type 1 LSAs, also called Router LSAs, are generated by every OSPF router to describe its interfaces, state, and neighbors within the same area. Type 2 LSAs, or Network LSAs, are generated by the Designated Router (DR) on broadcast and non-broadcast multi-access networks to describe all routers connected to that network segment. These LSAs help routers build a complete intra-area topology. Type 3 LSAs are summary LSAs created by Area Border Routers (ABRs) to advertise networks from one area to another, facilitating inter-area routing while reducing LSA flooding.
Type 5 LSAs are used to advertise external routes, such as routes redistributed from other routing protocols (e.g., EIGRP or BGP) into the OSPF domain. Unlike Type 3 LSAs, which summarize internal OSPF routes, Type 5 LSAs carry information about networks external to the OSPF autonomous system. These LSAs are flooded to all OSPF areas, except stub or not-so-stubby areas (which block Type 5 LSAs), ensuring that all routers have visibility into external destinations. By using Type 5 LSAs, OSPF integrates external routing information while maintaining loop-free, efficient routing within the OSPF domain.
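Type 5 LSAs are generated on the ASBR when redistribution is configured, for example (the process and AS numbers are illustrative):

```
router ospf 1
 redistribute bgp 65001 subnets   ! external routes enter OSPF as Type 5 LSAs
```

The command show ip ospf database external lists the resulting Type 5 LSAs and their advertising ASBR.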
Question 14:
Which technology provides secure Layer 2 encryption between Cisco devices on untrusted links?
A) IPsec
B) MACsec
C) GRE
D) MPLS
Answer: B) MACsec
Explanation:
MACsec (Media Access Control Security) provides encryption at Layer 2, securing traffic between devices on untrusted links. It prevents eavesdropping and ensures data integrity. IPsec operates at Layer 3, GRE is a tunneling protocol, and MPLS provides routing and VPNs, but not encryption.
MACsec (Media Access Control Security) is a Layer 2 security protocol that provides encryption, authentication, and integrity protection for Ethernet frames transmitted between network devices. By securing traffic at the data link layer, MACsec prevents eavesdropping, tampering, and man-in-the-middle attacks on untrusted or shared links, such as between switches or across enterprise LANs. Unlike higher-layer security mechanisms, MACsec protects all traffic on the link regardless of the protocol, making it ideal for securing sensitive communications inside the network.
Other protocols operate differently. IPsec provides encryption at Layer 3, securing IP packets end-to-end over potentially untrusted networks, such as the Internet. GRE (Generic Routing Encapsulation) is a tunneling protocol that encapsulates packets for transport across networks but does not provide encryption or authentication on its own. MPLS (Multiprotocol Label Switching) enables efficient routing and supports VPNs through label-based forwarding, but it does not inherently secure the payload.
By using MACsec, organizations can ensure that internal traffic between devices is confidential and tamper-proof, complementing higher-layer security protocols like IPsec. MACsec is particularly valuable for securing data center links, campus networks, and high-performance enterprise LANs, providing encryption with minimal impact on latency and maintaining the integrity of all transmitted frames at Layer 2.
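On IOS-XE switches, switch-to-switch MACsec is commonly deployed with MKA and a pre-shared key; a sketch under that assumption (the key chain name, key string, and interface are placeholders):

```
key chain MKA-KEYS macsec
 key 01
  cryptographic-algorithm aes-128-cmac
  key-string 12345678901234567890123456789012
interface GigabitEthernet1/0/1
 macsec network-link
 mka pre-shared-key key-chain MKA-KEYS
```

The command show macsec summary can be used to confirm that the link is encrypting frames.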
Question 15:
Which Cisco feature allows for dynamic routing protocol redistribution between OSPF and EIGRP with controlled metrics?
A) Route Maps
B) Policy-Based Routing
C) Static Routing
D) IP SLA
Answer: A) Route Maps
Explanation:
Route maps allow conditional redistribution between routing protocols, applying metrics, tags, or filters. For example, redistributing EIGRP into OSPF may require setting a default metric to avoid routing loops. Policy-based routing (PBR) affects forwarding, static routing is fixed, and IP SLA monitors performance metrics.
Route maps are powerful tools in Cisco networks that provide conditional control over routing decisions and redistribution between different routing protocols. They allow administrators to match specific criteria, such as prefixes, interfaces, or route tags, and then apply actions like setting metrics, route tags, or next-hop addresses. A common use case is redistributing routes from one protocol into another, such as redistributing EIGRP routes into OSPF. Without proper route-map configuration, this redistribution could create routing loops or inconsistent metrics. By applying a default metric or modifying attributes via a route map, administrators ensure stable, predictable routing across the network.
Other routing tools have distinct functions. Policy-Based Routing (PBR) allows administrators to override normal routing decisions on a per-packet basis, directing traffic according to source, destination, or application rather than the routing table. Static routing provides a fixed path to a destination without dynamic updates, suitable for small or simple networks. IP SLA (Service-Level Agreement) is used to monitor network performance metrics, such as latency, jitter, or packet loss, enabling dynamic adjustments or failover triggers.
Together, route maps, PBR, static routes, and IP SLA allow for flexible, granular, and intelligent routing strategies. Route maps, in particular, are essential for safely redistributing routes and controlling routing policy across complex multi-protocol networks.
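A sketch of controlled EIGRP-to-OSPF redistribution using a route map (the names, tag, metric, and prefixes are illustrative):

```
ip prefix-list LAN-NETS permit 10.10.0.0/16 le 24
route-map EIGRP-TO-OSPF permit 10
 match ip address prefix-list LAN-NETS
 set metric 100                  ! consistent seed metric for the redistributed routes
 set tag 100                     ! tag enables loop prevention at other redistribution points
router ospf 1
 redistribute eigrp 100 subnets route-map EIGRP-TO-OSPF
```

Routes not matched by the prefix list are implicitly denied by the route map and therefore never redistributed.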
Question 16:
Which MPLS component assigns labels to packets entering the network?
A) LSR
B) LER
C) CE
D) P router
Answer: B) LER
Explanation:
Label Edge Routers (LERs) operate at the edge of an MPLS network and assign labels to packets based on FEC (Forwarding Equivalence Class). LSRs (Label Switch Routers) forward labeled packets within the core. CE (Customer Edge) devices are customer-side routers, and P routers are core routers that forward labels without adding/removing them.
In an MPLS (Multiprotocol Label Switching) network, different types of routers perform specific roles to efficiently forward traffic using labels instead of traditional IP routing. Label Edge Routers (LERs) sit at the edge of the MPLS network and are responsible for assigning labels to incoming packets based on their Forwarding Equivalence Class (FEC). LERs classify traffic and determine the appropriate MPLS label for each packet before it enters the MPLS core, or remove labels when traffic exits the network. This labeling enables fast, efficient forwarding across the MPLS backbone without examining the IP header at every hop.
Label Switch Routers (LSRs) operate within the MPLS core and forward packets based solely on their labels, swapping them as necessary to guide traffic toward its destination. P routers, or provider core routers, also reside in the MPLS backbone, performing high-speed label switching without adding or removing labels; they focus on forwarding labeled packets efficiently. Customer Edge (CE) devices are routers located at the customer’s side of the network, connecting to the MPLS provider but not participating in MPLS labeling or switching.
By distinguishing these roles, MPLS networks achieve scalability, fast packet forwarding, and flexible traffic engineering. LERs are crucial because they bridge the IP and MPLS worlds, ensuring packets are labeled for efficient transport while maintaining routing and policy control at the network edge.
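A minimal IOS-style sketch of what turns a router into an LER/LSR is simply enabling MPLS label distribution on its core-facing links (addresses and interface names below are placeholders):

```
! Enable LDP as the label distribution protocol
mpls label protocol ldp
mpls ldp router-id Loopback0 force

! Core-facing interface toward a P router
interface GigabitEthernet0/0
 ip address 192.0.2.1 255.255.255.252
 mpls ip
```

On an edge router, interfaces toward the customer (CE) are left without mpls ip, so the device imposes labels on ingress traffic and removes them on egress, which is exactly the LER role described above.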
Question 17:
Which routing protocol supports both equal-cost and unequal-cost load balancing?
A) OSPF
B) EIGRP
C) RIP
D) BGP
Answer: B) EIGRP
Explanation:
EIGRP supports equal and unequal cost load balancing. The variance command allows EIGRP to include routes with higher metrics if they are within a factor of the best path. OSPF only performs equal-cost load balancing. RIP can do equal-cost load balancing, and BGP does not perform load balancing by default.
Enhanced Interior Gateway Routing Protocol (EIGRP) is a Cisco-proprietary hybrid routing protocol that supports both equal-cost and unequal-cost load balancing, providing flexibility in optimizing network traffic. By default, EIGRP uses the best path to a destination based on its composite metric, which includes bandwidth and delay. However, with the variance command, EIGRP can include routes with higher metrics in the routing table if they are within a configurable multiple of the best path metric. This allows traffic to be distributed over multiple paths, improving bandwidth utilization and redundancy while maintaining loop-free routing with the DUAL (Diffusing Update Algorithm).
In comparison, OSPF supports only equal-cost load balancing, forwarding traffic evenly across paths with the same cost, which can be limiting in networks with paths of differing quality. RIP, another distance-vector protocol, also supports equal-cost load balancing but is constrained by its maximum hop count and slower convergence. BGP, an exterior gateway protocol, does not perform load balancing by default; routing decisions are primarily based on path attributes such as AS_PATH, LOCAL_PREF, and MED, and traffic engineering requires additional configurations like multipath or route reflection policies.
EIGRP’s ability to perform unequal-cost load balancing makes it especially useful in enterprise networks where multiple paths of varying bandwidth and delay exist, enabling efficient use of available network resources while maintaining stability and fast convergence.
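A hedged configuration sketch of the variance mechanism follows (AS number and network are placeholders). With variance 2, any feasible successor whose metric is at most twice the successor's metric is installed; for example, a best metric of 256000 admits alternate paths with metrics up to 512000:

```
router eigrp 100
 network 10.0.0.0
 ! Install feasible successors within 2x the best-path metric
 variance 2
 ! Distribute traffic in proportion to each path's metric
 traffic-share balanced
```

Only feasible successors (paths that satisfy the DUAL feasibility condition) are eligible, which is what keeps unequal-cost load balancing loop-free.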
Question 18:
In a VXLAN deployment, which component maps Layer 2 segments over Layer 3 networks?
A) VTEP
B) L2SW
C) Router-on-a-Stick
D) CE
Answer: A) VTEP
Explanation:
VXLAN Tunnel Endpoints (VTEPs) encapsulate Layer 2 frames into VXLAN headers for transport over Layer 3 networks. VTEPs perform encapsulation/decapsulation, enabling virtual networks to span large Layer 3 domains. L2SW operates at Layer 2, Router-on-a-Stick provides inter-VLAN routing, and CE is customer-side equipment.
VXLAN (Virtual Extensible LAN) is a network virtualization technology that allows Layer 2 networks to extend over a Layer 3 infrastructure, enabling scalable and flexible segmentation for modern data centers. The key component in VXLAN is the VXLAN Tunnel Endpoint (VTEP), which is responsible for encapsulating Layer 2 Ethernet frames into VXLAN headers for transport over IP networks and then decapsulating them at the destination VTEP. This encapsulation allows virtual networks to span large Layer 3 domains, supporting multi-tenant environments and enabling seamless mobility of virtual machines across different subnets.
Other network components operate differently. A traditional Layer 2 switch (L2SW) forwards traffic based on MAC addresses but cannot extend VLANs across Layer 3 boundaries without additional protocols. Router-on-a-Stick is a technique used to provide inter-VLAN routing on a single physical interface by creating subinterfaces for each VLAN; it operates at Layer 3 but is limited in scalability compared to VXLAN overlays. Customer Edge (CE) devices are routers or switches located at the customer side of the network, connecting to a service provider or larger network, but they are not involved in VXLAN encapsulation.
By using VTEPs, VXLAN overlays enable scalable, isolated virtual networks over existing IP infrastructures, offering flexibility, traffic segmentation, and mobility that traditional L2/L3 designs cannot easily achieve, making it a cornerstone of modern software-defined data centers.
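As a hedged NX-OS-style sketch (VLAN, VNI, multicast group, and loopback are placeholders), a VTEP is configured by mapping a VLAN to a VNI and creating an NVE interface that sources the VXLAN tunnel:

```
! Enable VXLAN features on the switch
feature nv overlay
feature vn-segment-vlan-based

! Map VLAN 10 to VXLAN network identifier 10010
vlan 10
 vn-segment 10010

! The NVE interface is the VTEP: it encapsulates/decapsulates VXLAN traffic
interface nve1
 no shutdown
 source-interface loopback1
 member vni 10010
  mcast-group 239.1.1.1
```

Here flood-and-learn with a multicast group handles BUM traffic; production designs commonly replace this with a BGP EVPN control plane.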
Question 19:
Which Cisco feature ensures uninterrupted wireless client roaming within the same WLAN?
A) 802.1X
B) Fast Secure Roaming (FSR)
C) FlexConnect
D) WLC High Availability
Answer: B) Fast Secure Roaming (FSR)
Explanation:
Fast Secure Roaming (802.11r) allows wireless clients to roam between access points with minimal latency, maintaining session continuity. FlexConnect allows branch APs to switch locally, while WLC HA ensures controller redundancy. 802.1X provides authentication, not roaming.
Fast Secure Roaming (FSR), also known as 802.11r, is a wireless network feature that allows clients to roam seamlessly between access points (APs) with minimal latency. In traditional Wi-Fi networks, roaming can cause delays because clients must reauthenticate with each new AP, interrupting real-time applications like voice or video. 802.11r addresses this with Fast BSS Transition (FT), which derives and caches security keys for a target AP before the client roams, so the full authentication and four-way key exchange does not have to be repeated at each handoff. This reduces handoff time and maintains session continuity for delay-sensitive applications.
Other wireless technologies serve complementary purposes. FlexConnect allows branch APs to perform local switching, forwarding traffic directly to the LAN rather than tunneling it to the controller, which improves efficiency in remote or distributed deployments. WLC High Availability (HA) ensures redundancy of wireless LAN controllers, so if a primary controller fails, a secondary controller can take over without disrupting client connectivity. 802.1X, on the other hand, is an authentication protocol used to secure client access by validating credentials before granting network access, but it does not address roaming or handoff latency.
By implementing FSR/802.11r, organizations can support real-time applications, ensure smooth client mobility, and combine it with FlexConnect and WLC HA for a resilient, efficient, and secure wireless network infrastructure.
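On an AireOS-based controller, enabling 802.11r on a WLAN can be sketched roughly as follows (WLAN ID 1 and the timeout value are placeholders, and the WLAN must be disabled before its security settings change):

```
! Disable the WLAN before modifying security parameters
config wlan disable 1
! Enable 802.11r Fast Transition on the WLAN
config wlan security ft enable 1
! Set the FT reassociation timeout (seconds)
config wlan security ft reassociation-timeout 20 1
config wlan enable 1
```

This is a sketch of the general workflow, not an exhaustive configuration; exact syntax varies between AireOS and IOS-XE controllers.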
Question 20:
Which command displays all BGP routes learned from a specific neighbor?
A) show ip bgp neighbors <neighbor> routes
B) show ip bgp summary
C) show ip route bgp
D) show ip bgp
Answer: A) show ip bgp neighbors <neighbor> routes
Explanation:
The command show ip bgp neighbors <neighbor> routes lists all routes received from a particular BGP neighbor, including their status, attributes, and next-hop information. By contrast, show ip bgp summary provides only neighbor state, show ip route bgp shows the BGP routes installed in the routing table, and show ip bgp displays the full BGP table without neighbor-specific filtering.
In BGP (Border Gateway Protocol) networks, monitoring and troubleshooting routes is essential for maintaining stable inter-domain connectivity. The command show ip bgp neighbors <neighbor> routes provides detailed information about all routes received from a specific BGP neighbor. It displays each route’s status, such as whether it is valid, best, or suppressed, along with BGP attributes like AS_PATH, LOCAL_PREF, MED, and NEXT_HOP. This command is crucial for network engineers to understand which routes are being advertised by neighbors, verify policy enforcement, and troubleshoot routing issues.
Other BGP-related commands provide complementary information. The show ip bgp summary command gives an overview of all BGP neighbor relationships, including neighbor state (Idle, Active, Established), the number of prefixes received, and session uptime, but does not show the specific routes. The show ip route bgp command displays BGP-learned routes that are installed in the router’s IP routing table, providing a high-level view of reachable destinations. Finally, show ip bgp shows the entire BGP table, listing all known prefixes and attributes, without filtering for a specific neighbor.
By using show ip bgp neighbors <neighbor> routes, administrators can focus on a single neighbor’s advertisements, analyze route propagation, and verify BGP policies, making it an essential tool for precise troubleshooting and network optimization in complex BGP deployments.
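As a hedged IOS-style sketch (neighbor address and AS numbers are placeholders), enabling inbound soft reconfiguration lets the router keep a pre-policy copy of the neighbor's updates, so both filtered and unfiltered views can be compared:

```
router bgp 65001
 neighbor 198.51.100.2 remote-as 65002
 ! Store a pre-policy copy of updates from this neighbor
 neighbor 198.51.100.2 soft-reconfiguration inbound

! Post-policy routes accepted from the neighbor:
!   show ip bgp neighbors 198.51.100.2 routes
! Pre-policy routes exactly as the neighbor advertised them:
!   show ip bgp neighbors 198.51.100.2 received-routes
! Routes this router advertises to the neighbor:
!   show ip bgp neighbors 198.51.100.2 advertised-routes
```

Comparing received-routes against routes is a quick way to confirm that an inbound route map or prefix list is filtering what it should.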