Cisco 350-401 Implementing Cisco Enterprise Network Core Technologies (ENCOR) Exam Dumps and Practice Test Questions Set 6 Q 101-120 


Question 101

A network engineer needs to configure OSPF to prevent routing loops in a multi-area network. Which OSPF area type blocks Type 3, 4, and 5 LSAs from entering the area while allowing only a default route?

A) Standard area

B) Stub area

C) Totally stubby area

D) Not-so-stubby area (NSSA)

Answer: C)

Explanation:

The totally stubby area is the most restrictive OSPF area type: it blocks Type 3 summary LSAs, Type 4 ASBR summary LSAs, and Type 5 external LSAs from entering the area, while the Area Border Router (ABR) injects only a default route to reach destinations outside the area. This configuration significantly reduces routing table size and OSPF database overhead for routers within the area, making it ideal for branch offices or edge networks with limited resources. OSPF uses different LSA types to advertise routing information: Type 1 router LSAs describe router links within an area, Type 2 network LSAs represent multi-access networks, Type 3 summary LSAs advertise inter-area routes, Type 4 ASBR summary LSAs identify routers performing external redistribution, and Type 5 external LSAs describe routes redistributed from other protocols. In a standard area, all LSA types can exist, allowing full routing visibility but requiring routers to maintain large routing databases. Stub areas reduce overhead by blocking Type 5 external LSAs while still allowing Type 3 inter-area routes, making them suitable for areas that do not contain ASBRs. 

Totally stubby areas go further by blocking both external LSAs and inter-area summary LSAs, replacing them with a single default route advertised by the ABR. Routers in a totally stubby area therefore know only the routes within their own area plus a default route for everything else. This dramatically reduces memory and CPU requirements because the routing table contains only local area routes plus the default, and the OSPF database size is minimized. Traffic destined outside the area follows the default route to the ABR, which has full routing information and forwards appropriately. The trade-off is loss of specific path information: all external traffic uses the single default path regardless of whether more optimal paths exist. Totally stubby areas are a Cisco-proprietary enhancement to standard OSPF, while RFC-defined stub areas still allow Type 3 LSAs. Configuration involves designating the area as totally stubby on the ABR using the "area X stub no-summary" command and configuring a regular stub area on the other routers in the area. A totally stubby area cannot contain ASBRs performing redistribution and cannot be the backbone area 0. This design is optimal for simple branch locations with single uplinks where detailed routing information provides no benefit and resource conservation is a priority.
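As a brief sketch of the configuration described above (the process ID, area number, and networks are illustrative), the ABR uses the no-summary keyword while internal routers use the plain stub keyword:

```
! ABR: area 10 becomes totally stubby (blocks Type 3/4/5, injects default)
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0
 network 10.10.0.0 0.0.255.255 area 10
 area 10 stub no-summary
!
! Internal router in area 10: regular stub keyword is sufficient
router ospf 1
 network 10.10.0.0 0.0.255.255 area 10
 area 10 stub
```

Only the ABR needs no-summary; all routers in the area must agree that the area is a stub, or adjacencies will not form.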

Question 102

An engineer is implementing QoS to prioritize voice traffic over a WAN link. Which QoS mechanism classifies and marks packets at the network edge for priority treatment throughout the network?

A) Queuing only

B) Classification and marking

C) Shaping exclusively

D) Policing without marking

Answer: B)

Explanation:

Classification and marking is the foundational QoS mechanism that identifies traffic types at the network edge and applies markers to packet headers enabling consistent priority treatment as packets traverse the network without requiring deep inspection at every device. QoS implementations follow a trust boundary model where traffic is classified and marked as close to the source as possible, then network devices trust these markings and provide differentiated treatment based on marked values. Classification uses various packet characteristics to identify traffic types including Layer 3 and 4 information such as source and destination IP addresses, protocol types, and port numbers, DSCP or IP Precedence values already marked in packets, and deep packet inspection examining application signatures for sophisticated identification. 

Once traffic is classified, marking applies priority indicators to packet headers. For Layer 3 marking, the DSCP (Differentiated Services Code Point) field in the IP header uses 6 bits providing 64 possible values, with common markings including EF (Expedited Forwarding) for voice traffic requiring low latency and jitter, AF (Assured Forwarding) classes for business-critical data with different drop precedences, and Best Effort for standard traffic. Layer 2 marking uses CoS (Class of Service) in the 802.1Q VLAN tag, providing 8 priority levels. Voice traffic typically receives DSCP EF marking (decimal 46) or CoS 5, video conferencing receives AF41 or CoS 4, business-critical data receives AF31 or CoS 3, and bulk data receives lower priorities. The classification and marking strategy must be consistent network-wide because inconsistent markings cause unpredictable QoS behavior. Edge devices such as access switches or routers perform classification using the detailed information available about local traffic sources, applying trusted markings. 

Core network devices then trust these markings and queue or forward packets according to marked priorities without re-classifying. This approach scales efficiently because core devices perform simple marking-based forwarding rather than complex per-flow inspection. The marking persists throughout the packet's journey until it reaches the destination, enabling end-to-end QoS. Configuration involves creating class maps defining classification criteria, policy maps associating classes with marking actions, and service policies applying these policies to interfaces. Voice traffic identification commonly uses UDP ports 16384-32767 for RTP streams and ports 5060-5061 (UDP or TCP) for SIP signaling, though application-based classification using NBAR provides more accurate identification of proprietary VoIP protocols.
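The class-map / policy-map / service-policy sequence described above can be sketched as follows (class and policy names, percentages, and the WAN interface are illustrative assumptions):

```
! Classify voice bearer and signaling traffic
class-map match-any VOICE
 match ip dscp ef
 match protocol rtp audio
class-map match-any SIGNALING
 match ip dscp cs3
!
! Mark and prioritize at the WAN edge
policy-map WAN-EDGE
 class VOICE
  set ip dscp ef
  priority percent 20
 class SIGNALING
  set ip dscp cs3
  bandwidth percent 5
 class class-default
  fair-queue
!
interface Serial0/0/0
 service-policy output WAN-EDGE
```

The priority command places voice in a low-latency queue while the bandwidth command reserves capacity for signaling without strict prioritization.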

Question 103

A network administrator needs to implement a first-hop redundancy protocol that provides load balancing across multiple gateways. Which protocol supports active-active gateway operation?

A) HSRP (Hot Standby Router Protocol)

B) VRRP (Virtual Router Redundancy Protocol)

C) GLBP (Gateway Load Balancing Protocol)

D) STP (Spanning Tree Protocol)

Answer: C)

Explanation:

GLBP (Gateway Load Balancing Protocol) is the Cisco proprietary first-hop redundancy protocol that provides both gateway redundancy and load balancing by allowing multiple routers to simultaneously forward traffic for the same virtual IP address, utilizing available bandwidth more efficiently than active-standby protocols. Traditional redundancy protocols like HSRP and VRRP operate in active-standby mode where one router actively forwards traffic while others remain idle until failover occurs, wasting the standby routers’ forwarding capacity. GLBP addresses this limitation through a unique architecture using one Active Virtual Gateway (AVG) and up to four Active Virtual Forwarders (AVFs). 

The AVG is responsible for answering ARP requests from hosts requesting the gateway MAC address, but instead of providing a single MAC address, GLBP uses virtual MAC addresses where each participating router is assigned a unique virtual MAC address. When hosts ARP for the virtual gateway IP address, the AVG responds with different virtual MAC addresses in round-robin, weighted, or host-dependent fashion, distributing gateway responsibilities across multiple routers. Each router receiving a virtual MAC address assignment becomes an AVF and forwards traffic for that MAC address. This creates true load balancing where multiple routers simultaneously forward traffic for different hosts even though all hosts use the same gateway IP address. GLBP supports multiple load-balancing algorithms including round-robin where virtual MAC addresses are distributed equally, weighted where routers with higher capacity receive more assignments, and host-dependent where specific hosts consistently receive the same virtual MAC address maintaining session persistence. The protocol provides sub-second failover where if an AVF fails, another router assumes its virtual MAC address and continues forwarding for affected hosts. 

If the AVG fails, another router is elected AVG through priority and preemption mechanisms similar to HSRP. GLBP uses multicast address 224.0.0.102 and UDP port 3222 for hello messages with default timers of 3-second hello and 10-second hold time. Configuration involves enabling GLBP on interfaces with the same group number, configuring virtual IP address, optionally setting priorities and load-balancing methods, and configuring preemption if desired. The protocol integrates with interface tracking where routers reduce their priority if tracked interfaces fail, triggering AVG or AVF role changes. GLBP is ideal for environments with multiple routers and bandwidth optimization requirements, providing better resource utilization than active-standby alternatives.
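A minimal two-router sketch of the configuration steps listed above might look like this (addresses, group number, and priority are illustrative; the peer would use a different real IP but the same group and virtual IP):

```
! Router A: higher priority so it becomes AVG, with host-dependent balancing
interface GigabitEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 glbp 1 ip 192.168.1.1
 glbp 1 priority 150
 glbp 1 preempt
 glbp 1 load-balancing host-dependent
```

Hosts all point to 192.168.1.1 as their gateway; the AVG hands out different virtual MAC addresses in its ARP replies so both routers forward traffic simultaneously.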

Question 104

An engineer needs to configure route redistribution between EIGRP and OSPF. Which metric consideration is critical when redistributing routes between these protocols?

A) Using default metrics without adjustment

B) Setting appropriate seed metrics for each protocol

C) Ignoring metric compatibility issues

D) Redistributing without metric configuration

Answer: B)

Explanation:

Setting appropriate seed metrics for each protocol is critical when redistributing routes between EIGRP and OSPF because these protocols use fundamentally incompatible metric calculations that cannot be directly translated, requiring administrators to assign meaningful seed metrics preventing routing issues. EIGRP uses a composite metric calculated from bandwidth, delay, reliability, load, and MTU with bandwidth and delay being the default components, while OSPF uses cost based on interface bandwidth with cost calculated as reference bandwidth divided by interface bandwidth. These metric systems are completely incompatible and cannot be automatically converted. When redistributing routes from one protocol into another without specifying metrics, different behaviors occur depending on protocol and Cisco IOS version. 

OSPF assigns redistributed routes a default metric of 20 for most protocols or 1 for BGP routes, while EIGRP requires explicit metric configuration or it will not redistribute routes at all. Without proper seed metrics, problems arise including suboptimal routing where default metrics don’t reflect actual path quality, routing loops when mutual redistribution occurs without careful metric manipulation, and route preference issues where redistributed routes are preferred over legitimate paths due to better metrics. Best practice seed metric configuration requires understanding the routing domain topology and selecting metrics that reflect relative path quality. For OSPF receiving EIGRP routes, administrators should assign costs reflecting the path quality where routes to distant networks receive higher costs than routes to nearby networks, using values that integrate properly with native OSPF costs in the network. 

For EIGRP receiving OSPF routes, administrators must specify all five metric components typically using “default-metric bandwidth delay reliability load MTU” command with values representing the slowest link characteristics in the redistributed domain. Administrative distance also plays a crucial role where EIGRP uses AD 90 for internal routes and 170 for external, while OSPF uses 110 for all routes, meaning EIGRP internal routes are naturally preferred over OSPF routes which could cause issues during mutual redistribution. Route tagging should be implemented to prevent redistribution loops by tagging routes during redistribution and filtering tagged routes from being redistributed back into the source protocol using route maps. Summarization at redistribution points reduces routing table size and instability.
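Combining the seed-metric and route-tagging advice above, a hedged sketch of mutual redistribution on one boundary router could look like this (process numbers, tag values, and metric components are illustrative):

```
! EIGRP side: seed metric (bandwidth delay reliability load MTU) required
router eigrp 100
 default-metric 10000 100 255 1 1500
 redistribute ospf 1 route-map OSPF-TO-EIGRP
!
! OSPF side: explicit seed cost instead of the default of 20
router ospf 1
 redistribute eigrp 100 subnets metric 50 route-map EIGRP-TO-OSPF
!
! Tag routes on the way in; block routes carrying the other domain's tag
route-map OSPF-TO-EIGRP deny 10
 match tag 100
route-map OSPF-TO-EIGRP permit 20
 set tag 110
!
route-map EIGRP-TO-OSPF deny 10
 match tag 110
route-map EIGRP-TO-OSPF permit 20
 set tag 100
```

Routes that originated in EIGRP carry tag 100 inside OSPF and are denied from re-entering EIGRP, and vice versa, preventing redistribution feedback loops.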

Question 105

A network administrator needs to implement VRF-Lite to segment customer traffic on a shared infrastructure without MPLS. Which statement correctly describes VRF-Lite functionality?

A) Requires MPLS labels for operation

B) Creates separate routing tables per VRF instance

C) Uses single global routing table

D) Only works with BGP routing

Answer: B)

Explanation:

VRF-Lite creates separate routing tables per VRF instance enabling traffic segmentation on a single physical router without requiring MPLS infrastructure, providing logical network separation ideal for multi-tenant environments, managed service providers, or enterprise network segmentation. VRF (Virtual Routing and Forwarding) technology virtualizes the routing function creating multiple independent routing table instances on a single device where each VRF maintains its own routing table, CEF forwarding table, and routing protocol processes completely isolated from other VRFs. This isolation means routes in one VRF are invisible to other VRFs even if they use overlapping IP address spaces, enabling multiple customers or departments to use the same private IP ranges without conflict. VRF-Lite is the implementation of VRF without MPLS labels, using interface-based segregation where interfaces are assigned to specific VRF instances, and all traffic arriving on an interface belongs to that VRF. 

Configuration involves creating VRF definitions specifying VRF name and optionally route distinguisher for identification, assigning interfaces to VRFs using “ip vrf forwarding” command which clears existing IP configuration requiring IP address reconfiguration, configuring routing protocols per VRF where each VRF can run independent routing protocol instances, and optionally configuring route leaking between VRFs when selective communication between VRFs is required. Common use cases include service provider customer separation where each customer receives a dedicated VRF preventing visibility into other customers’ networks, enterprise departmental isolation separating IT, finance, and guest networks at the network layer, and security zoning creating isolated segments for different security levels. VRF-Lite supports all routing protocols including EIGRP, OSPF, and BGP with each VRF running independent protocol instances. BGP is particularly useful for VRF implementations as it supports VPNv4 address family and route distinguishers natively. 

Route leaking allows controlled communication between VRFs through route import and export using route targets or route maps, enabling scenarios like shared services where multiple VRFs access common resources such as DNS or authentication servers in a separate shared services VRF. The primary limitation of VRF-Lite compared to MPLS VPN is that each router in the path must be VRF-aware and have appropriate VRF configuration, whereas MPLS VPN allows VRF-unaware P routers in the core. VRF-Lite does not use MPLS labels for forwarding but relies on VRF context from ingress interface, making it simpler to deploy but less scalable for large provider networks.
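As a sketch of the interface-based segregation described above (VRF names, RDs, and addressing are illustrative), two VRFs can even reuse the same subnet without conflict:

```
! Define two customer VRFs
ip vrf CUSTOMER-A
 rd 65000:1
ip vrf CUSTOMER-B
 rd 65000:2
!
! Assign interfaces; note the command removes any existing IP address
interface GigabitEthernet0/1
 ip vrf forwarding CUSTOMER-A
 ip address 10.1.1.1 255.255.255.0
interface GigabitEthernet0/2
 ip vrf forwarding CUSTOMER-B
 ip address 10.1.1.1 255.255.255.0
!
! Independent routing protocol instance per VRF
router ospf 10 vrf CUSTOMER-A
 network 10.1.1.0 0.0.0.255 area 0
```

Each VRF's routes are verified separately, for example with "show ip route vrf CUSTOMER-A".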

Question 106

An engineer is troubleshooting EIGRP neighbor adjacency issues. Which condition would prevent EIGRP neighbors from forming adjacency?

A) Different hold timers configured

B) Mismatched autonomous system numbers

C) Different hello intervals configured

D) Varying interface bandwidth values

Answer: B)

Explanation:

Mismatched autonomous system numbers prevent EIGRP neighbors from forming an adjacency because EIGRP routers only establish neighbor relationships with routers in the same autonomous system, making the AS number one of the fundamental parameters that must match. EIGRP neighbor adjacency depends on several requirements that must be satisfied. The AS number is the primary identifier of an EIGRP routing domain, configured using the "router eigrp AS-number" command, and routers only process hello packets from neighbors with matching AS numbers. Other critical requirements include primary IP addresses on connected interfaces being in the same subnet (ensuring Layer 3 connectivity), matching K-values (the metric weight constants used in EIGRP metric calculation), and matching authentication configuration: if authentication is enabled, both routers must use the same authentication key and method. Interestingly, several parameters do not need to match, including hello and hold timers (routers accept different values and each router uses its own configured timers), interface bandwidth settings (which affect metric calculation but not adjacency formation), and router IDs (used for internal identification but not for adjacency validation). 

When troubleshooting EIGRP adjacencies, systematic verification includes confirming interfaces are in up/up state using “show ip interface brief”, verifying IP addressing with “show ip interface” ensuring neighbors are in the same subnet, checking AS number consistency by examining “show ip protocols” output, verifying there are no ACLs blocking EIGRP multicast traffic to 224.0.0.10 or unicast EIGRP updates, confirming authentication configuration matches if implemented, and verifying K-values match using “show ip protocols” which displays metric weights. Common issues preventing adjacency include passive interface configuration where “passive-interface” command prevents hello packets transmission, mismatched subnets from configuration errors, EIGRP not enabled on interfaces due to missing network statements, and authentication failures from key mismatches or key chain timing issues. 

Debugging tools include "debug eigrp packets", which shows the hello packet exchange and provides insight into why adjacencies fail, and "show ip eigrp neighbors", which displays established neighbors with their hold time, uptime, queue counts, and sequence numbers. A neighbor appears in a pending state while the relationship is initializing and becomes a fully established entry after successful adjacency formation. A healthy neighbor relationship is evidenced by low queue counts, indicating smooth packet exchange, and by hold times resetting periodically as hello packets are received.
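The verification steps above can be condensed into a short command sequence (output omitted; interpretation noted in comments):

```
show ip interface brief      ! interfaces up/up with correct addressing?
show ip protocols            ! confirm AS number and K-values match
show ip eigrp neighbors      ! hold time, uptime, and queue counts
debug eigrp packets          ! observe hello exchange when adjacency fails
```

Running the show commands on both routers side by side usually exposes the mismatched parameter faster than debugging alone.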

Question 107

A network administrator needs to configure port security on access switches to prevent unauthorized devices. Which violation mode drops traffic from unauthorized MAC addresses while keeping the port operational?

A) Shutdown mode

B) Restrict mode

C) Protect mode

D) Default mode

Answer: B)

Explanation:

Restrict mode is the port security violation mode that drops traffic from unauthorized MAC addresses while keeping the port operational, generating SNMP traps and syslog messages that provide notification of security violations without the port disruption caused by shutdown mode. Port security is a Layer 2 security feature that limits which MAC addresses can send traffic through a switchport, preventing unauthorized devices from connecting and mitigating MAC flooding attacks. The feature works by learning or configuring allowed MAC addresses up to a maximum count specified by administrators, then monitoring all frames received on the port and taking action when frames arrive from MAC addresses not in the allowed list. Three violation modes determine what happens when unauthorized MAC addresses are detected. Shutdown mode immediately places the port into the error-disabled state, requiring administrator intervention to recover ("shutdown" followed by "no shutdown") or automatic recovery via error-disable recovery; this mode provides maximum security but causes operational disruption, affecting legitimate traffic if violations occur. Restrict mode drops frames from unauthorized MAC addresses while allowing frames from authorized addresses to pass, increments the security violation counter, and sends SNMP trap and syslog notifications, enabling security monitoring without port downtime. Protect mode silently drops unauthorized frames and allows authorized frames without any notification or logging, providing security enforcement without alerting or statistics. Restrict mode balances security with availability, making it suitable for environments where security violations should be prevented and monitored but port downtime is unacceptable. 

Configuration involves enabling port security with “switchport port-security” command, setting maximum MAC address count with “switchport port-security maximum” command typically 1 for user ports or higher for IP phones with PC pass-through, configuring violation action with “switchport port-security violation restrict”, and specifying MAC learning method either dynamic learning where the switch learns MAC addresses from traffic, sticky learning where learned addresses are added to running configuration and can be saved for persistent security, or static configuration where administrators manually specify allowed MAC addresses. Additional options include aging to remove learned addresses after inactivity periods and limiting the MAC address type to secure-configured, secure-dynamic, or secure-sticky. Common implementation includes combination with voice VLANs where maximum of 2-3 addresses allows IP phone plus PC connection while restrict mode handles violations from hubs or unauthorized switches connected by users. Verification uses “show port-security interface” displaying security configuration and violation counts, and “show port-security address” listing learned MAC addresses per port.
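Putting the configuration steps above together for the common phone-plus-PC scenario (interface and VLAN numbers are illustrative):

```
interface GigabitEthernet0/5
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 20
 switchport port-security
 switchport port-security maximum 3
 switchport port-security violation restrict
 switchport port-security mac-address sticky
```

Sticky learning writes learned addresses into the running configuration, so saving the config makes the allowed MAC list persistent across reloads.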

Question 108

An engineer is implementing StackWise technology on Cisco Catalyst switches. Which benefit does StackWise provide compared to traditional standalone switches?

A) Increased port density only

B) Single management plane with redundancy

C) Lower cost per port

D) Reduced power consumption

Answer: B)

Explanation:

Single management plane with redundancy is the primary benefit of StackWise technology where multiple physical switches operate as a single logical switch with unified configuration, management, and control plane while providing hardware redundancy for high availability. StackWise creates a stack of switches interconnected through dedicated stacking cables forming a high-bandwidth backplane ring topology where switches can forward traffic between any ports in the stack at full line rate. The stack operates with one active switch serving as the stack master providing the control plane and management functions, and one standby switch ready to assume master role if the active master fails, while remaining stack members are subordinate switches providing additional ports but not participating in control plane operations. 

This architecture provides several advantages including simplified management where administrators configure and manage the entire stack as a single device with one IP address eliminating the need to configure each switch individually, unified software version across all stack members simplifying upgrades and ensuring consistency, increased bandwidth where the stacking backplane provides high-throughput connectivity between switches typically 80-480 Gbps depending on generation, and enhanced availability through control plane redundancy where standby switch assumes master role with minimal disruption if active master fails. StackWise Virtual extends this concept using standard Ethernet links instead of dedicated stacking cables enabling distributed chassis across different locations. Configuration is maintained in the stack master and synchronized to all members, so adding new switches to the stack automatically provisions them with appropriate configuration. 

Stack election determines which switch becomes master based on configured priority values with higher priority preferred, existing master status with current master retaining role unless overridden by higher priority, and switch MAC address as the final tiebreaker. Best practices include assigning manual priorities ensuring predictable master election, using switches with matching software versions to avoid version mismatch conditions, implementing redundant stacking cables in ring topology for path redundancy, and planning stack numbering carefully since renumbering requires switch reloads. Limitations include proprietary nature requiring same switch model families for stacking compatibility, maximum stack size limits typically 8-9 switches, and potential total stack failure if stacking cables fail breaking the ring without redundancy. The single management plane dramatically reduces operational overhead compared to managing multiple standalone switches, making StackWise ideal for access layer deployments.
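On stackable Catalyst platforms, the manual-priority best practice above is typically configured as follows (member numbers and priority values are illustrative; exact syntax varies by platform):

```
! Higher priority wins the active/master election (range 1-15)
switch 1 priority 15
switch 2 priority 14
!
show switch              ! verify Active, Standby, and Member roles
show switch stack-ports  ! verify the stacking ring is fully connected
```

Deterministic priorities prevent an unexpected switch from winning the election after a full-stack power cycle.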

Question 109

A network administrator needs to configure AAA authentication for network device access. Which AAA method list provides fallback authentication if the RADIUS server is unavailable?

A) RADIUS only method

B) Method list with RADIUS then local

C) Local authentication only

D) No authentication configured

Answer: B)

Explanation:

Method list with RADIUS then local provides robust AAA authentication with failover capability where network devices first attempt authentication against RADIUS servers, but automatically fall back to local username database if RADIUS servers are unreachable, ensuring administrative access remains available during server failures. AAA (Authentication, Authorization, and Accounting) provides centralized security management for network device access with authentication verifying user identity, authorization determining what authenticated users can do, and accounting tracking user activities for audit purposes. Method lists define the sequence of authentication methods attempted when users login, specified in order of preference with subsequent methods tried if earlier methods are unavailable. 

The "RADIUS then local" configuration means devices first attempt RADIUS authentication by sending an Access-Request containing the username and password to configured RADIUS servers. If a RADIUS server responds with Access-Accept, authentication succeeds using RADIUS; but if the RADIUS servers are unreachable, indicated by the timeout expiring without response, the device automatically tries local authentication, checking the username and password against locally configured user accounts. This fallback mechanism prevents lockout situations where RADIUS server failures would otherwise deny all administrative access, including the access needed to troubleshoot the RADIUS connectivity problem. Configuration involves defining custom method lists or modifying the default method list; for example, "aaa authentication login default group radius local" creates a default method list using the RADIUS group first and the local database second. The "default" keyword applies this method list to all login attempts unless specific method lists override it for particular line types. Named method lists provide granular control, applying different authentication sequences to different access methods; they are configured with "aaa authentication login list-name group radius local" and then applied to specific lines with "login authentication list-name". An important distinction exists between a RADIUS server responding with Access-Reject, which fails that method immediately (the next method is not tried for that user), and a RADIUS server being unreachable, which times out and then falls through to the next method. 

For proper failover behavior, administrators should configure appropriate timeout values balancing quick failover against legitimate network delays, typically 5-10 seconds. Multiple RADIUS servers can be configured providing redundancy within the RADIUS method before falling back to local, attempted in configured order until one responds or all timeout. Authorization method lists similarly define authorization source sequences, commonly using “aaa authorization exec default group radius local” for EXEC mode authorization and “aaa authorization commands 15 default group radius local” for privilege level 15 command authorization.
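A minimal sketch tying the pieces above together (server address, shared key, and usernames are illustrative placeholders):

```
aaa new-model
!
! Local fallback account used only when RADIUS is unreachable
username admin privilege 15 secret StrongLocalPass
!
radius server RAD1
 address ipv4 10.0.0.50 auth-port 1812 acct-port 1813
 key RadiusSharedKey
!
aaa group server radius RAD-GROUP
 server name RAD1
!
aaa authentication login default group RAD-GROUP local
aaa authorization exec default group RAD-GROUP local
!
line vty 0 4
 login authentication default
```

Because the local account is only consulted when all RADIUS servers time out, a rejected RADIUS password does not fall through to the local database.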

Question 110

An engineer needs to implement IPv6 addressing on the network. Which type of IPv6 address is automatically configured using Modified EUI-64 format?

A) Manually configured global unicast address

B) SLAAC-generated link-local address

C) Anycast address

D) Multicast address

Answer: B)

Explanation:

SLAAC-generated link-local address uses the Modified EUI-64 format to automatically derive the 64-bit interface identifier portion of the IPv6 address from the device’s MAC address, enabling zero-configuration IPv6 connectivity without DHCP or manual configuration. IPv6 addressing includes multiple address types with different scopes and purposes. Link-local addresses in the FE80::/10 range are mandatory on all IPv6-enabled interfaces and used for communication on the local link, neighbor discovery, and router advertisements. These addresses are automatically generated when IPv6 is enabled on an interface using SLAAC (Stateless Address Autoconfiguration), which creates the interface identifier through Modified EUI-64 process. This process takes the 48-bit MAC address, splits it into two 24-bit halves, inserts FFFE hexadecimal in the middle creating a 64-bit value, then inverts the 7th bit (the Universal/Local bit) of the first byte. For example, MAC address 0012.3456.789A becomes interface identifier 0212:34FF:FE56:789A after bit inversion. The complete link-local address combines FE80::/64 prefix with this interface identifier resulting in FE80::0212:34FF:FE56:789A. 

While link-local addresses use Modified EUI-64 by default on Cisco IOS, global unicast addresses can optionally use this method depending on configuration. SLAAC for global addresses uses Router Advertisement messages from IPv6 routers containing network prefixes, and hosts combine the advertised prefix with their EUI-64-derived interface identifier to create global unicast addresses without administrator intervention. However, privacy concerns with EUI-64 led to RFC 4941 privacy extensions, where interface identifiers are randomly generated rather than derived from MAC addresses, preventing device tracking across networks. Manual IPv6 configuration allows administrators to specify complete addresses without Modified EUI-64, often preferred for servers and network devices requiring static, predictable addresses. 

Cisco IOS routers support multiple IPv6 address configuration methods including “ipv6 address prefix/prefix-length eui-64” explicitly using Modified EUI-64, “ipv6 address address/prefix-length” for fully manual specification, and “ipv6 address autoconfig” for SLAAC on router interfaces. The Modified EUI-64 format ensures uniqueness when using globally unique MAC addresses from IEEE assignment, as MAC addresses are guaranteed unique per manufacturer. Verification uses “show ipv6 interface” displaying all IPv6 addresses on interfaces including link-local, global unicast, and multicast addresses, while “show ipv6 neighbors” displays neighbor discovery cache similar to IPv4 ARP table showing link-local addresses of neighbors.
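The router-side configuration options above can be sketched as follows (the documentation prefix 2001:db8::/32 and interface are illustrative):

```
interface GigabitEthernet0/0
 ipv6 enable                            ! auto-generates EUI-64 link-local
 ipv6 address 2001:db8:1:1::/64 eui-64  ! global unicast, EUI-64 interface ID
!
show ipv6 interface GigabitEthernet0/0  ! displays link-local and global addresses
```

With a MAC address of 0012.3456.789A, both addresses end in the same 212:34FF:FE56:789A interface identifier derived by the EUI-64 process described above.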

Question 111

A network administrator is implementing VLAN hopping attack prevention. Which configuration prevents VLAN hopping attacks on access ports?

A) Enabling trunk on all ports

B) Disabling DTP and setting mode to access

C) Using native VLAN tagging only

D) Enabling all VLANs on trunks

Answer: B)

Explanation:

Disabling DTP and explicitly setting switchport mode to access prevents VLAN hopping attacks by eliminating dynamic trunking negotiation and ensuring access ports cannot be tricked into becoming trunk ports, which attackers exploit to inject traffic into unauthorized VLANs. VLAN hopping is a Layer 2 security attack with two primary variants. Switch spoofing attack occurs when attackers send DTP (Dynamic Trunking Protocol) negotiation frames attempting to establish a trunk with the switch, and if successful, the attacker’s port becomes trunk gaining access to all VLANs allowed on that trunk rather than being confined to a single access VLAN. Double-tagging attack exploits native VLAN behavior where attackers send frames with two 802.1Q tags, the outer tag matching the native VLAN is stripped by the first switch, and the inner tag causes forwarding to the target VLAN on downstream switches. Preventing these attacks requires multiple security configurations. Disabling DTP completely with “switchport nonegotiate” command prevents dynamic trunk negotiation ensuring ports cannot be negotiated into trunk mode, explicitly configuring access ports with “switchport mode access” command prevents manual or negotiated trunk conversion, disabling unused ports with “shutdown” reduces attack surface by eliminating potential entry points, and placing unused ports in dedicated unused VLAN isolates any compromised unused ports. 

For trunk port security, explicitly configuring trunks with “switchport mode trunk” eliminates negotiation vulnerabilities, using “switchport trunk allowed vlan” command to permit only required VLANs follows principle of least privilege reducing VLAN exposure, and configuring native VLAN to unused VLAN with “switchport trunk native vlan XXX” using VLAN number not assigned to any access ports prevents double-tagging attacks. Additional native VLAN security involves tagging native VLAN frames with “vlan dot1q tag native” command eliminating untagged frame handling that double-tagging exploits. Private VLANs provide isolation between ports in the same VLAN preventing lateral movement even within a compromised VLAN. Port security limits MAC addresses per port preventing MAC flooding attacks that could disable VLAN segmentation. DHCP snooping, Dynamic ARP Inspection, and IP Source Guard form a security foundation that, while not directly preventing VLAN hopping, provide defense-in-depth against attackers who successfully access unauthorized VLANs. Best practice security configuration involves systematic hardening where all access ports are explicitly configured as access mode with DTP disabled, all trunk ports are explicitly configured as trunk with limited VLAN allowance and secured native VLAN, and regular auditing verifies no ports are in dynamic desirable or auto modes which enable negotiation vulnerabilities.
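A hardening sketch combining the access-port and trunk-port commands discussed above (interface ranges and VLAN numbers are illustrative):

```
interface range GigabitEthernet1/0/1 - 24
 switchport mode access
 switchport access vlan 10
 switchport nonegotiate
!
interface GigabitEthernet1/0/48
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 switchport trunk native vlan 999        ! unused VLAN, no access ports assigned
!
vlan dot1q tag native                    ! tag native VLAN frames on trunks
```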

Question 112

An engineer is configuring SD-Access fabric. Which component serves as the mapping database storing endpoint-to-location mappings?

A) Fabric edge node

B) Fabric border node

C) Control plane node (LISP Map Server)

D) Fabric intermediate node

Answer: C)

Explanation:

Control plane node functioning as LISP Map Server serves as the centralized mapping database in SD-Access fabric, storing authoritative endpoint-to-location mappings that enable location-independent addressing and overlay transport across the fabric. SD-Access is Cisco’s software-defined access solution built on DNA Center automation, implementing policy-based segmentation and automated provisioning using VXLAN overlay for data plane transport, LISP for control plane mapping, and TrustSec for segmentation enforcement. The fabric architecture separates control plane from data plane with distinct node roles. Fabric edge nodes are access layer switches where endpoints connect, performing VXLAN encapsulation for fabric communication, registering connected endpoints with control plane, and enforcing segmentation policy based on Scalable Group Tags. Fabric border nodes connect the SD-Access fabric to external networks including traditional networks, WAN connections, and data centers, performing route leakage between fabric and external domains and handling VXLAN to non-VXLAN translation. Fabric intermediate nodes are typically distribution and core switches providing underlay connectivity and VXLAN transport without endpoint attachment or control plane functions. Control plane nodes run LISP Map Server and Map Resolver functions providing the critical mapping service. LISP (Locator/Identifier Separation Protocol) separates endpoint identity (EID – Endpoint Identifier) from location (RLOC – Routing Locator) enabling mobility and scalable routing. When endpoints connect to fabric edge nodes, the edge registers the endpoint’s EID (typically IP address) along with the edge node’s RLOC (underlay IP address) with the LISP Map Server creating an EID-to-RLOC mapping. 
When traffic needs to reach an endpoint, the source fabric edge queries the Map Server requesting the destination EID’s location, the Map Server responds with the RLOC of the fabric edge where the destination is connected, and the source edge encapsulates traffic in VXLAN with the destination RLOC enabling direct overlay communication. 

This architecture provides location transparency where endpoints can move between fabric edges without address changes, scalability by eliminating per-endpoint state in intermediate nodes concentrating mapping intelligence in control plane, and optimal traffic patterns avoiding suboptimal routing through centralized points. The Map Server database is replicated across multiple control plane nodes for redundancy with any Map Server able to answer queries. SD-Access uses segments (Virtual Networks) similar to VRFs for traffic isolation, and the control plane maintains separate mapping databases per segment. DNA Center serves as the management and provisioning system, automatically configuring fabric nodes with appropriate roles, generating VXLAN configurations, programming policy, and integrating with ISE for identity services.

Question 113

A network administrator needs to implement Cisco DNA Center for network automation. Which southbound protocol does DNA Center primarily use to configure network devices?

A) SNMP only

B) NETCONF with YANG models

C) CLI scripting exclusively

D) Telnet commands

Answer: B)

Explanation:

NETCONF with YANG models is the primary southbound protocol DNA Center uses to configure and manage network devices, providing structured, programmatic device interaction that enables automation, validation, and version control impossible with traditional CLI-based management. DNA Center is Cisco’s intent-based networking controller providing centralized management, automation, and assurance for enterprise networks. Southbound interfaces connect controllers to network infrastructure, and DNA Center employs multiple southbound protocols for different purposes with NETCONF being preferred for configuration management. NETCONF (Network Configuration Protocol) is an IETF standard using XML-encoded RPC over SSH providing structured device interaction with capabilities including configuration management with separate running and candidate configurations, transaction-based configuration changes ensuring atomicity where entire configurations apply or roll back on failure, and validation ensuring configuration syntax correctness before application. 

YANG (Yet Another Next Generation) data modeling language defines the structure and constraints of configuration and operational data, providing standardized vendor-neutral device models. Cisco develops YANG models for IOS-XE, IOS-XR, and NX-OS defining all configurable parameters and operational state information. DNA Center uses NETCONF to send configuration changes expressed as YANG model instances to devices, receiving back structured responses indicating success or failure with specific error information when problems occur. This approach provides numerous advantages over CLI automation including structured data eliminating text parsing and regular expressions, validation catching configuration errors before device application, transactional behavior preventing partial configurations, and API-driven workflows enabling integration with external systems. DNA Center’s automation capabilities leverage NETCONF/YANG for Intent-Based Networking where administrators define high-level business intent such as required network access policy, DNA Center translates intent into specific device configurations across potentially hundreds of devices, pushes configurations via NETCONF ensuring consistency, and continuously monitors to verify actual network state matches intended state. 

Additional southbound protocols include REST APIs for certain device interactions, SNMP for monitoring and telemetry collection, SSH with CLI for legacy device support and operations not yet covered by YANG models, and Telemetry using NETCONF or gRPC for streaming operational data. The shift from CLI to model-driven programmability represents fundamental network management evolution, though DNA Center maintains CLI capability for backward compatibility and edge cases where YANG models don’t cover required configurations. Device requirements for DNA Center management include running appropriate software versions supporting NETCONF and YANG, having necessary device packages installed, and enabling NETCONF service on devices.
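On IOS-XE, enabling the NETCONF service mentioned above is a short configuration (the username and secret here are example values; a privilege-15 user is required for NETCONF-YANG access):

```
username dnac privilege 15 secret ExampleSecret
netconf-yang
```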

Question 114

An engineer is implementing multicast routing in the network. Which PIM mode uses a shared tree rooted at a rendezvous point for initial multicast traffic forwarding?

A) PIM Dense Mode

B) PIM Sparse Mode

C) PIM Bidirectional Mode

D) PIM Source-Specific Mode

Answer: B)

Explanation:

PIM Sparse Mode uses a shared tree architecture initially rooted at a Rendezvous Point (RP) where all multicast sources register and all receivers join, creating a centralized distribution tree before optionally switching to source-specific shortest path trees for optimized forwarding. PIM (Protocol Independent Multicast) is a multicast routing protocol family with different modes suited for different network topologies and traffic patterns. PIM Sparse Mode is designed for networks where multicast receivers are sparsely distributed and not all network segments have interested receivers, making it the most widely deployed PIM variant. The operation involves several phases. Initially, when a receiver wants to join a multicast group, it sends an IGMP membership report to its directly connected router (first-hop router), which then sends a PIM Join message toward the RP for that multicast group hopping router-by-router building a shared tree from receiver to RP. When a multicast source begins sending, its first-hop router encapsulates multicast packets in unicast Register messages sent directly to the RP, and the RP decapsulates these packets and forwards them down the shared tree to all receivers. 

This creates a distribution path from source through RP to receivers called the shared tree or rendezvous point tree (RPT). For optimization, once traffic flows, the receiver’s first-hop router (last-hop router) can initiate a switch to shortest path tree (SPT) by sending a Join message directly toward the source creating a source-specific tree bypassing the RP, typically triggered when traffic rate exceeds a threshold. The SPT provides optimal paths but requires per-source state in routers. The RP is critical infrastructure requiring careful placement and redundancy. RP configuration methods include static RP where administrators manually configure RP addresses on all routers, Auto-RP which is Cisco proprietary using dense mode flooding to distribute RP mapping, and Bootstrap Router (BSR) which is standards-based using BSR election and RP candidate advertisement. Multiple RPs can be configured for redundancy and load distribution with different groups assigned to different RPs. Anycast RP provides RP redundancy using MSDP or PIM Anycast RP techniques. 

PIM Sparse Mode scales well because it only forwards multicast traffic on branches with active receivers avoiding the flooding behavior of dense mode, uses explicit joins rather than prune messages reducing control traffic, and leverages the RP as a central meeting point simplifying source-receiver rendezvous. Configuration involves enabling multicast routing globally, configuring PIM sparse mode on all interfaces carrying multicast traffic, and configuring RP either statically or through Auto-RP/BSR mechanisms.
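A minimal static-RP configuration following the steps above might look like this (interface name and the RP address 10.0.0.1 are illustrative):

```
ip multicast-routing
!
interface GigabitEthernet0/0
 ip pim sparse-mode
!
ip pim rp-address 10.0.0.1
```

Some platforms (for example IOS-XE distributed switching) require “ip multicast-routing distributed” instead of the plain form.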

Question 115

A network administrator needs to implement network automation using Python. Which Python library provides programmatic access to Cisco devices using NETCONF?

A) Paramiko for SSH only

B) ncclient for NETCONF operations

C) Requests for HTTP only

D) Telnetlib for Telnet access

Answer: B)

Explanation:

ncclient is the Python library specifically designed for NETCONF operations providing a high-level programmatic interface to network devices supporting NETCONF protocol, enabling automation scripts to retrieve configuration, modify settings, and query operational state using standardized methods. Network automation increasingly relies on programmatic interfaces replacing manual CLI-based management with code-driven approaches that improve consistency, speed, and scalability. Python has emerged as the predominant language for network automation due to its readability, extensive library ecosystem, and strong community support. The ncclient library implements the NETCONF protocol client side allowing Python scripts to interact with NETCONF-enabled devices. The library handles NETCONF session establishment over SSH, capability exchange during session setup, RPC message construction and transmission, XML parsing of device responses, and error handling for network and protocol issues. 

Common ncclient operations include establishing connections to devices specifying hostname, port, username, and authentication credentials, retrieving configuration using get_config method specifying the configuration datastore such as running, candidate, or startup and optionally filtering returned data, editing configuration using edit_config method sending YANG-modeled configuration changes with specified operations like merge, replace, or delete, committing changes in devices supporting candidate configuration, and retrieving operational state using get method for non-configuration data. Example usage involves importing the manager class from ncclient, establishing connection with manager.connect specifying device parameters, sending RPC operations using manager methods which return XML responses, parsing responses using ElementTree or similar XML parsing, and closing connections properly with manager.close_session. 

The library supports filtering using NETCONF filters allowing scripts to retrieve only relevant data subsets rather than entire configuration, significantly improving performance. YANG models define the structure of data exchanged, and scripts should construct XML conforming to device YANG models. Error handling is critical because network operations can fail due to connectivity issues, authentication failures, or invalid configuration data, and proper exception handling ensures scripts respond gracefully. Alternative Python libraries serve different purposes where Paramiko provides low-level SSH connectivity useful for CLI automation but requires manual command parsing, Netmiko builds on Paramiko adding convenience methods for CLI interaction with various device types, Requests library handles REST API calls to controllers and devices with REST interfaces, and NAPALM provides unified abstraction across vendors normalizing differences in CLI and API implementations. For comprehensive automation, combinations of libraries are often used where ncclient handles NETCONF interactions, Requests interfaces with DNA Center or other controllers, and Paramiko provides fallback for devices lacking programmable interfaces.
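A short ncclient sketch of the operations described above. The hostname, credentials, and port in fetch_interface() are illustrative assumptions; the subtree filter uses the standard IETF interfaces YANG model, but a given device may expose different models:

```python
# Sketch: retrieving one interface's running config over NETCONF with ncclient.
import xml.etree.ElementTree as ET


def build_interface_filter(name):
    """Build a NETCONF subtree filter for one interface (IETF interfaces model)."""
    return (
        '<filter>'
        '<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">'
        '<interface><name>' + name + '</name></interface>'
        '</interfaces>'
        '</filter>'
    )


def fetch_interface(host, user, password, name):
    # ncclient is imported here so build_interface_filter() stays usable
    # even where ncclient is not installed.
    from ncclient import manager
    with manager.connect(host=host, port=830, username=user, password=password,
                         hostkey_verify=False) as m:
        reply = m.get_config(source="running",
                             filter=build_interface_filter(name))
        return reply.data_xml


# The filter is plain XML and can be inspected without touching a device:
flt = build_interface_filter("GigabitEthernet1")
root = ET.fromstring(flt)  # well-formed: parses to a <filter> element
```

The subtree filter keeps the reply small, and wrapping the connection in a `with` block ensures the NETCONF session is closed cleanly even on errors.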

Question 116

An engineer is configuring EtherChannel on Cisco switches. Which protocol automatically negotiates EtherChannel formation and supports up to 16 physical links with 8 active?

A) PAgP (Port Aggregation Protocol)

B) LACP (Link Aggregation Control Protocol)

C) Static EtherChannel only

D) UDLD protocol

Answer: B)

Explanation:

LACP (Link Aggregation Control Protocol) is the IEEE 802.3ad standard protocol for EtherChannel negotiation supporting up to 16 physical links per group with 8 active links and 8 standby links, providing standardized link aggregation interoperable across vendors. EtherChannel technology aggregates multiple physical links between switches into a single logical link providing increased bandwidth, redundancy, and load balancing while appearing as a single link to spanning tree, preventing temporary loops during convergence. Three EtherChannel configuration methods exist with different characteristics. Static EtherChannel configures links manually without a negotiation protocol, providing the fastest convergence but lacking the automatic misconfiguration detection that protocols provide. PAgP is a Cisco-proprietary protocol supporting only 8 physical links total; it uses desirable mode to actively initiate negotiation and auto mode to passively wait for negotiation, requiring at least one end in desirable mode.

LACP is standards-based supporting 16 physical links where 8 are active forwarding traffic and 8 are standby ready to activate if active links fail, using active mode to initiate negotiation and passive mode to respond to negotiation, requiring at least one end in active mode. LACP provides superior scalability and interoperability making it preferred for modern deployments especially when connecting to non-Cisco devices. LACP negotiation uses LACP Data Units (LACPDUs) exchanged between devices containing system priority, system MAC address, port priority, and port number used to determine which links become active when more than 8 are configured. The device with lower system priority (and lower MAC address as tiebreaker) determines which ports are active, selecting ports with lowest port priority values. This allows administrators to influence which links are preferred by adjusting priorities.

Configuration involves creating port-channel interface defining logical bundle, configuring member interfaces with channel-group number and LACP mode active or passive, and ensuring consistent configuration across member interfaces including speed, duplex, VLAN membership, and switchport mode. All member interfaces must have identical configuration or EtherChannel formation fails. Load balancing distributes traffic across active links using hash algorithms based on packet information such as source MAC, destination MAC, source IP, destination IP, or combinations thereof, selected with “port-channel load-balance” global command. Common load-balance methods include src-dst-ip for Layer 3 distribution, src-dst-mac for Layer 2 distribution, and src-dst-port for Layer 4 distribution. Verification uses “show etherchannel summary” displaying port-channel interfaces and member status, “show etherchannel load-balance” showing configured load-balance method, and “show lacp neighbor” displaying LACP-specific information including neighbor priority and port status.
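The configuration steps above can be sketched as follows (interface and port-channel numbers are illustrative):

```
interface Port-channel1
 switchport mode trunk
!
interface range GigabitEthernet1/0/1 - 2
 switchport mode trunk
 channel-group 1 mode active      ! LACP active; "passive" would only respond
!
port-channel load-balance src-dst-ip
!
! verification
show etherchannel summary
show lacp neighbor
```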

Question 117

A network administrator needs to implement wireless security using the strongest encryption method. Which security protocol provides the most secure wireless encryption?

A) WEP (Wired Equivalent Privacy)

B) WPA (Wi-Fi Protected Access)

C) WPA2 with AES encryption

D) Open authentication with MAC filtering

Answer: C)

Explanation:

WPA2 with AES encryption provides the strongest wireless security among the listed options, using CCMP (Counter Mode with CBC-MAC Protocol) based on the AES encryption algorithm, offering robust protection against eavesdropping and unauthorized access through strong cryptographic methods and secure authentication mechanisms. Wireless security evolution reflects growing understanding of vulnerabilities and cryptographic advances. WEP was the original 802.11 security mechanism using RC4 encryption with 40-bit or 104-bit keys, but serious flaws including weak initialization vectors and lack of message integrity checking allow attackers to crack keys within minutes, making WEP completely inadequate for any security-conscious deployment. WPA was introduced as an interim improvement using TKIP (Temporal Key Integrity Protocol), which wraps RC4 encryption with per-packet keying and message integrity checking, providing significantly better security than WEP while maintaining compatibility with existing hardware; however, TKIP vulnerabilities and RC4 weaknesses made it a temporary solution. WPA2 implements the full 802.11i security standard using CCMP with AES encryption, providing enterprise-grade security through 128-bit AES encryption in counter mode for confidentiality, CBC-MAC for message integrity and authentication, per-packet keys preventing key reuse attacks, and replay protection using packet numbering. WPA2 supports two authentication modes. WPA2-Personal (WPA2-PSK) uses a pre-shared key where all users share a common passphrase, suitable for home and small business but problematic for enterprises because key distribution is difficult and a compromised key affects all users.

WPA2-Enterprise uses 802.1X authentication with RADIUS server providing individual user credentials, allowing per-user authentication and authorization, supporting various EAP methods like PEAP, EAP-TLS, and EAP-FAST, and enabling dynamic key generation per user session. WPA3 is the latest standard improving security with Simultaneous Authentication of Equals (SAE) replacing PSK’s four-way handshake providing protection against offline dictionary attacks, forward secrecy ensuring past sessions remain secure even if long-term keys are compromised, and protected management frames preventing deauthentication attacks. However, WPA2 with AES remains widely deployed and provides strong security when properly configured. 

Best practices include using WPA2-Enterprise with 802.1X for corporate environments, using strong passphrases of at least 20 characters for WPA2-Personal, disabling WPA1/TKIP compatibility mode ensuring only WPA2 clients connect, regularly rotating pre-shared keys if using PSK mode, and enabling protected management frames if supported. Additional security measures include disabling SSID broadcast reducing visibility, implementing MAC address filtering as additional access control layer though not primary security, using separate VLANs for wireless traffic isolating from wired network, and deploying wireless intrusion prevention systems detecting rogue access points and attacks.

Question 118

An engineer is troubleshooting BGP peering issues between autonomous systems. Which BGP state indicates a successfully established neighbor relationship?

A) Idle state

B) Active state

C) OpenSent state

D) Established state

Answer: D)

Explanation:

Established state indicates a fully functional BGP neighbor relationship where the TCP connection is active, BGP parameters have been successfully exchanged and validated, and routers are actively exchanging routing updates enabling BGP to forward traffic between autonomous systems. 

BGP (Border Gateway Protocol) is the Internet’s routing protocol responsible for exchanging routing information between autonomous systems using a path vector algorithm that prevents loops through AS-path attribute. BGP operates over TCP port 179 providing reliable transport, and neighbor relationships progress through multiple states during establishment. The BGP finite state machine defines six states in the peering process. Idle state is the initial state where BGP process is waiting to start, initiated by administrative action or restart timer expiration, with BGP listening for incoming connections or initiating outbound connections to configured neighbors. 

Connect state occurs when BGP initiates TCP connection to the neighbor, waiting for TCP three-way handshake completion, and if successful transitions to OpenSent but if TCP fails returns to Active state. Active state indicates BGP is actively trying to establish TCP connection through repeated connection attempts, often seen when connectivity issues prevent TCP establishment or when the neighbor is unreachable. OpenSent state occurs after successful TCP connection when BGP sends OPEN message containing BGP version, AS number, router ID, and optional parameters like authentication, then waits for OPEN message from neighbor. OpenConfirm state is reached when both routers have exchanged OPEN messages and parameters are validated, with routers exchanging KEEPALIVE messages to confirm relationship before final transition. Established state represents successful peering where routers exchange full BGP tables initially, then send incremental updates when routing changes occur, and periodic KEEPALIVE messages maintain the relationship. Routers remain in Established state during normal operation, reverting to earlier states only when failures occur. Common issues preventing Established state include TCP connectivity problems from ACLs blocking port 179, firewall rules, or routing issues preventing reachability, misconfigured neighbor addresses or AS numbers causing parameter mismatch, authentication failures when passwords don’t match, router ID conflicts when routers have identical IDs, and EBGP multihop issues when neighbors aren’t directly connected without ebgp-multihop configuration. 

Troubleshooting uses “show ip bgp summary” displaying neighbors, their AS numbers, current state, and prefixes received, “show ip bgp neighbors” providing detailed information including state, sent and received messages, capabilities, and timer values, and “debug ip bgp” showing real-time BGP events though debug should be used cautiously in production. Best practices include configuring authentication preventing unauthorized peering, using loopback interfaces for IBGP peering providing interface independence, configuring update-source for EBGP multihop scenarios, and implementing prefix limits protecting against accidental or malicious route injection.
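A minimal EBGP peering sketch matching the description above (AS numbers and the neighbor address 203.0.113.2 are illustrative):

```
router bgp 65001
 neighbor 203.0.113.2 remote-as 65002
 neighbor 203.0.113.2 password ExampleSecret   ! optional MD5 authentication
!
! verification
show ip bgp summary
```

In “show ip bgp summary”, a session that has reached the Established state shows a prefix count in the State/PfxRcd column rather than a state name; any state name displayed there (Idle, Active, and so on) indicates the session is not yet established.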

Question 119

A network administrator needs to implement network segmentation for security. Which technology provides dynamic VLAN assignment based on user authentication?

A) Static VLAN assignment only

B) 802.1X with dynamic VLAN assignment

C) Port-based VLAN configuration

D) Management VLAN separation

Answer: B)

Explanation:

802.1X with dynamic VLAN assignment provides network access control that assigns users to appropriate VLANs automatically based on their authentication credentials and authorization attributes returned from RADIUS server, enabling flexible security policies where VLAN membership is determined by user identity rather than physical port location. 802.1X is an IEEE standard for port-based network access control implementing authentication framework where supplicant (client device) seeks network access, authenticator (network switch) controls port access, and authentication server (typically RADIUS) validates credentials and provides authorization. 

The dynamic VLAN assignment process begins when a client connects to a switch port configured for 802.1X, the port remains in unauthorized state blocking all traffic except EAP authentication frames, the supplicant and authentication server perform EAP authentication exchange with the switch acting as intermediary encapsulating EAP in RADIUS, and upon successful authentication the RADIUS server sends Access-Accept message including VLAN assignment attributes. The switch receives the RADIUS attributes containing VLAN ID and possibly other attributes like ACL names or QoS settings, dynamically assigns the port to the specified VLAN overriding any statically configured VLAN, and transitions the port to authorized state allowing traffic to flow. This automation provides significant benefits including role-based access where employees, contractors, and guests are automatically placed in appropriate VLANs based on credentials regardless of where they physically connect, simplified management eliminating manual VLAN configuration on switch ports, improved security through centralized policy enforcement consistent across all access points, and flexibility allowing users to roam between locations maintaining their VLAN assignment. RADIUS server configuration defines user groups or attributes correlated with VLAN IDs, typically using attributes like “Tunnel-Type” set to VLAN, “Tunnel-Medium-Type” set to 802, and “Tunnel-Private-Group-ID” containing VLAN ID or VLAN name. Identity Services Engine (ISE) or other policy servers make this configuration administrative-friendly through policy-based interfaces. 

Switch configuration involves globally enabling AAA and 802.1X, configuring RADIUS server parameters including IP address and shared secret, enabling 802.1X on access ports with authentication port-control auto, and optionally configuring guest VLAN for unauthenticated devices and restricted VLAN for authentication failures. Multi-domain authentication supports both data and voice VLANs on single port allowing IP phones and connected PCs separate VLAN assignments. Additional features include MAC authentication bypass for devices not supporting 802.1X supplicants, web authentication providing browser-based authentication for guest access, and flexible authentication sequencing trying multiple methods in configured order.
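The switch-side configuration described above can be sketched as follows (the RADIUS server address, shared secret, and interface number are illustrative; the VLAN itself is assigned dynamically by the RADIUS server, not on the port):

```
aaa new-model
aaa authentication dot1x default group radius
aaa authorization network default group radius   ! required for dynamic VLAN assignment
dot1x system-auth-control
!
radius server ISE
 address ipv4 10.1.1.10 auth-port 1812 acct-port 1813
 key ExampleSecret
!
interface GigabitEthernet1/0/5
 switchport mode access
 authentication port-control auto
 dot1x pae authenticator
```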

Question 120

An engineer is implementing DMVPN for hub-and-spoke VPN connectivity. Which protocol dynamically learns and maintains spoke-to-spoke tunnel mappings?

A) Static tunnel configuration only

B) NHRP (Next Hop Resolution Protocol)

C) OSPF routing protocol

D) GRE tunneling alone

Answer: B)

Explanation:

NHRP (Next Hop Resolution Protocol) dynamically learns and maintains mappings between spoke tunnel IP addresses and their physical NBMA (Non-Broadcast Multiple Access) addresses enabling direct spoke-to-spoke VPN tunnels without requiring traffic to hairpin through the hub, providing optimal routing for inter-site communication. DMVPN (Dynamic Multipoint VPN) is Cisco’s solution for scalable VPN deployments combining multiple technologies to create dynamic mesh VPN networks over IP infrastructure. 

The architecture uses hub-and-spoke topology where hub router serves as central aggregation point and multiple spoke routers connect to the hub, but DMVPN enables dynamic spoke-to-spoke tunnels that form on-demand when traffic requires direct communication between spokes. This architecture provides scalability advantages over traditional full-mesh VPN configurations that require N*(N-1)/2 tunnel configurations for N sites, becoming unmanageable for large deployments. DMVPN uses several component technologies working together. Multipoint GRE (mGRE) tunnels allow single tunnel interface to communicate with multiple destinations rather than requiring separate point-to-point GRE tunnels per remote site, dramatically reducing configuration complexity. IPsec encryption provides security for data traversing the tunnels with pre-shared keys or certificates for authentication. Dynamic routing protocols like EIGRP, OSPF, or BGP run over tunnel interfaces providing routing intelligence. NHRP is the critical component that enables dynamic tunnel formation by maintaining database mapping tunnel IP addresses (overlay) to NBMA IP addresses (underlay). Hub routers are configured as NHRP Next Hop Servers (NHS) maintaining authoritative mapping information, while spoke routers are NHRP clients that register their tunnel-to-NBMA mappings with the NHS. 

When a spoke needs to communicate with another spoke, it queries the NHS requesting the NBMA address of the destination tunnel IP, the NHS responds with the mapping, and the source spoke establishes direct IPsec tunnel to the destination spoke bypassing the hub for optimal routing. DMVPN supports three phases with different characteristics. Phase 1 implements hub-and-spoke only where all traffic traverses the hub without spoke-to-spoke tunnels, suitable for small deployments where hub capacity is sufficient. Phase 2 enables spoke-to-spoke tunnels but requires spoke routers to install specific routes for each spoke network, causing routing table scalability issues with many spokes. 

Phase 3 resolves scalability concerns by using NHRP redirect and shortcut mechanisms where traffic initially goes through hub which sends NHRP redirect to the source spoke, the spoke then establishes direct tunnel to destination, and subsequent traffic uses the optimized path. Configuration involves creating mGRE tunnel interfaces on hub and spokes, configuring NHRP with hub as NHS and spokes as clients, implementing IPsec profiles for encryption, and running dynamic routing protocols over tunnels.
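A simplified mGRE/NHRP sketch of the hub and spoke roles described above, omitting the IPsec profile for brevity (tunnel addresses in 10.0.0.0/24 and the hub NBMA address 192.0.2.1 are illustrative):

```
! Hub (NHRP Next Hop Server)
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 ip nhrp network-id 1
 ip nhrp map multicast dynamic
!
! Spoke (NHRP client)
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1
 ip nhrp map 10.0.0.1 192.0.2.1       ! hub tunnel IP -> hub NBMA (underlay) IP
 ip nhrp map multicast 192.0.2.1
```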

 
