Question 181
A network engineer needs to configure HSRP for gateway redundancy with preemption enabled. Which HSRP state indicates a router is actively forwarding traffic for the virtual IP address?
A) Standby state
B) Active state
C) Listen state
D) Init state
Answer: B)
Explanation:
Active state indicates the HSRP router is currently forwarding packets for the virtual IP address and responding to ARP requests for the virtual MAC address, serving as the default gateway for hosts in the subnet. HSRP operates with multiple routers sharing a virtual IP address, but only one router actively forwards traffic at any time while the others remain ready for failover.

HSRP routers progress through several states during operation. Init state occurs during initialization, when HSRP first starts and the router hasn’t received hello messages from other group members. Listen state happens when the router receives hello messages from other routers but is neither the active nor the standby router, monitoring the group passively. Standby state designates the backup router that will become active if the current active router fails, maintaining hello message exchange and remaining ready for immediate takeover. Active state represents the router currently performing gateway functions: owning the virtual IP address and virtual MAC address, forwarding packets, and sending periodic hello messages.

State transitions depend on priority values, where routers with higher priority become active; the default priority is 100 and the configurable range is 0-255. Preemption allows a higher-priority router to reclaim the active role when it comes online or recovers from a failure, configured with the “standby preempt” command. Without preemption, the current active router retains its role even when higher-priority routers join.

HSRP uses multicast address 224.0.0.2 for version 1 or 224.0.0.102 for version 2 to exchange hello messages, with a default 3-second hello timer and 10-second hold timer. The virtual MAC address follows the format 0000.0c07.acXX for version 1 or 0000.0c9f.fXXX for version 2, where XX (two hex digits, groups 0-255) or XXX (three hex digits, groups 0-4095) represents the HSRP group number.
Configuration involves enabling HSRP on interfaces with “standby group-number ip virtual-ip” command, setting priority with “standby group-number priority value”, enabling preemption if desired, and optionally configuring authentication. Interface tracking decrements priority when tracked interfaces fail, triggering failover if priority drops below standby router’s priority. Verification uses “show standby” displaying HSRP state, priority, virtual IP, active and standby routers, and timer information.
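The virtual MAC derivation described above can be sketched in a few lines. This is an illustrative helper, not anything a switch runs; the function name is our own:

```python
def hsrp_virtual_mac(group: int, version: int = 1) -> str:
    """Derive the HSRP virtual MAC (Cisco dotted notation) from the group number."""
    if version == 1:
        if not 0 <= group <= 255:
            raise ValueError("HSRPv1 groups are 0-255")
        return f"0000.0c07.ac{group:02x}"   # 0000.0c07.acXX
    if not 0 <= group <= 4095:
        raise ValueError("HSRPv2 groups are 0-4095")
    return f"0000.0c9f.f{group:03x}"        # 0000.0c9f.fXXX
```

For example, group 10 yields 0000.0c07.ac0a under version 1 and 0000.0c9f.f00a under version 2, which is why each group gets a distinct, predictable virtual MAC.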
Question 182
An administrator is implementing Cisco DNA Center assurance capabilities. Which component collects telemetry data from network devices for analysis and troubleshooting?
A) Command-line interface only
B) Network Data Platform with streaming telemetry
C) SNMP traps exclusively
D) Syslog messages only
Answer: B)
Explanation:
Network Data Platform with streaming telemetry collects real-time network data from devices using model-driven telemetry, providing DNA Center Assurance with rich datasets for analytics, anomaly detection, and proactive issue identification. Traditional monitoring relies on polling mechanisms like SNMP where management stations periodically query devices for status information, creating delays between events and detection while consuming bandwidth with repeated polls. Streaming telemetry reverses this model where network devices continuously push operational data to collectors at high frequencies, typically every few seconds or even sub-second intervals.
DNA Center Assurance leverages telemetry for comprehensive network visibility including device health monitoring tracking CPU, memory, interface statistics, and hardware status, application performance measuring response times, packet loss, and throughput, client connectivity analyzing association, authentication, and DHCP processes, and network issues identifying configuration problems, hardware failures, and capacity constraints. The telemetry architecture uses YANG data models defining the structure of operational data, with devices encoding telemetry using formats like JSON or GPB (Google Protocol Buffers) for efficient transmission. Telemetry subscriptions specify which data to collect and at what frequency, configured either on devices directly or dynamically from DNA Center. The Network Data Platform aggregates telemetry from all network elements, normalizes data into common schemas, stores time-series data for historical analysis, and provides APIs for assurance applications to query. DNA Center Assurance applications consume this data for multiple use cases including baselining to establish normal network behavior patterns, anomaly detection using machine learning to identify deviations from baseline indicating potential issues, correlation across multiple data sources to identify root causes rather than just symptoms, and predictive analytics forecasting future issues before they impact users. The platform supports multiple telemetry protocols including NETCONF with periodic subscriptions, gRPC for efficient streaming, and RESTCONF for RESTful data access. Device requirements include running software versions supporting model-driven telemetry and having sufficient resources for telemetry processing. Configuration involves enabling telemetry on devices and establishing connectivity to DNA Center collectors.
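A collector’s job of parsing a JSON-encoded telemetry push and pulling out one counter can be sketched as below. The message layout here is illustrative only — real model-driven telemetry payloads follow the subscribed YANG path’s structure, and the field names (`node`, `values`, and so on) are our assumptions:

```python
import json

# Hypothetical JSON-encoded telemetry push for a CPU-utilization subscription
message = json.dumps({
    "node": "switch-01",
    "subscription_id": 101,
    "timestamp_ms": 1700000000000,
    "path": "process-cpu-oper:cpu-usage/cpu-utilization",  # illustrative path
    "values": {"five_seconds": 12, "one_minute": 9, "five_minutes": 8},
})

def cpu_five_min(raw: str) -> int:
    """Extract the five-minute CPU figure from one telemetry push."""
    record = json.loads(raw)
    return record["values"]["five_minutes"]
```

Because the device pushes such records every few seconds, the collector builds a time series without ever polling — the reversal of the SNMP model described above.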
Question 183
A network engineer needs to configure switch stacking with consistent master election. Which parameter determines the stack master when priority values are equal?
A) Lowest IP address
B) MAC address (lowest wins)
C) Longest uptime
D) Random selection
Answer: B)
Explanation:
MAC address serves as the tiebreaker when priority values are equal during stack master election, with the lowest MAC address winning the election and becoming the stack master controlling the entire stack. Cisco StackWise technology creates a unified control plane across multiple physical switches operating as a single logical switch with distributed forwarding. Stack master election determines which physical switch provides the master control plane managing the stack configuration, handling management protocols, and maintaining the unified system image.
The election process follows a deterministic hierarchy using multiple factors. First, the current stack master always retains its role if still operational, providing stability and preventing unnecessary master changes. Second, if no current master exists or the master fails, priority values determine the new master with higher numerical priority values preferred, configurable from 1-15 with default value 1. Third, when multiple switches have identical highest priority, the switch that was previously the stack master in a prior boot cycle takes precedence. Fourth, if priority values match and no previous master exists, the switch with the longest uptime since last reload becomes master.
Finally, if all previous factors are equal, the switch with the lowest MAC address wins the election. This hierarchical approach ensures predictable master election while providing flexibility through priority configuration. Best practices recommend manually configuring priorities on intended master switches with higher values like 15, leaving other switches at default or lower priorities. This guarantees the desired switch becomes master during initial formation and after any failures. Configuration uses “switch priority value” command in global configuration mode, requiring switch reload to take effect.
Stack master provides critical functions including maintaining the running configuration with all stack member configurations consolidated into single file, running routing protocols and spanning tree for the entire stack, handling management access with IP address for entire stack, and coordinating software upgrades across members. If the stack master fails, a new master is elected automatically using the same hierarchy, causing brief control plane disruption of seconds while the new master assumes responsibility. Data plane continues forwarding during master election with minimal traffic loss. Verification uses “show switch” displaying stack members, their roles (master, member, standby), priorities, MAC addresses, and states, helping confirm intended master election results.
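The priority-then-lowest-MAC tiebreak can be modeled as a simple sort key. This is a toy sketch of just those two election factors (a real stack also weighs current-master status and uptime, as described above):

```python
def elect_master(switches):
    """switches: list of (name, priority, mac) with mac as an integer.
    Higher priority wins; the lowest MAC address breaks a priority tie."""
    return min(switches, key=lambda s: (-s[1], s[2]))[0]
```

With all priorities left at the default of 1, the switch with the numerically lowest MAC becomes master — which is exactly why best practice sets an explicit priority of 15 on the intended master rather than leaving the outcome to MAC addresses.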
Question 184
An administrator is configuring FlexConnect mode on wireless access points. Which FlexConnect feature allows APs to continue switching client traffic locally when connectivity to the WLC is lost?
A) Central switching only
B) Local switching with VLAN mapping
C) Cloud-dependent forwarding
D) Controller-required mode
Answer: B)
Explanation:
Local switching with VLAN mapping enables FlexConnect APs to continue forwarding client wireless traffic directly to the local wired network even when connectivity to the Wireless LAN Controller is lost, providing branch office resilience and reducing WAN bandwidth consumption. FlexConnect, previously called H-REAP, is a Cisco wireless solution for branch deployments where APs connect across WAN to centralized WLCs. Traditional centralized architectures tunnel all client traffic back to the controller for processing and forwarding to the wired network, consuming WAN bandwidth and creating single points of failure.
FlexConnect addresses these challenges through hybrid operation supporting both central and local switching modes. In connected mode when WLC connectivity exists, FlexConnect APs can operate in central switching mode tunneling traffic to the controller similar to local APs, or local switching mode where wireless client traffic is bridged directly onto the local wired network without traversing the WAN. Local switching dramatically reduces WAN bandwidth requirements by keeping local traffic local. VLAN mapping configuration on FlexConnect APs associates wireless SSIDs with local wired VLANs, enabling proper traffic segmentation. When WLC connectivity is lost, FlexConnect enters standalone mode continuing to provide wireless connectivity with limitations. In standalone mode, local switching continues functioning allowing existing associated clients to communicate and new clients to connect using cached configuration.
However, certain features requiring controller interaction become unavailable including guest authentication requiring redirect to web portal, new WLAN configuration or changes, and centralized policy updates. FlexConnect groups enable efficient management where APs in the same location are grouped, allowing administrators to push common configurations like VLAN mappings, backup RADIUS servers, and local authentication credentials to all group members simultaneously. This is particularly valuable for large branch deployments with many APs per site. Local authentication capability stores RADIUS credentials locally on the AP, enabling client authentication during WLC outages by validating against cached user database. FlexConnect ACLs provide local policy enforcement without controller dependency. Configuration involves enabling FlexConnect mode on APs through WLC, creating VLAN mappings associating SSIDs with wired VLANs, optionally configuring local authentication with RADIUS server details and cached credentials, and creating FlexConnect groups for multi-AP sites. The solution is ideal for branch offices with local users and applications where local switching reduces latency and WAN dependency.
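The connected/standalone forwarding behavior above can be condensed into a small decision function. This is a hypothetical model for illustration, not AP software — the return strings and parameter names are our own:

```python
def handle_client_frame(switching: str, wlc_reachable: bool) -> str:
    """Hypothetical model of FlexConnect data-plane decisions.
    switching: 'local' or 'central' (the per-WLAN setting)."""
    if switching == "local":
        # VLAN-mapped local switching keeps working in standalone mode too
        return "bridge to local wired VLAN"
    # Central switching depends on the CAPWAP tunnel to the controller
    return "tunnel to WLC" if wlc_reachable else "drop (standalone)"
```

The key property is visible in the first branch: locally switched WLANs are unaffected by WAN or controller loss, which is the resilience the question asks about.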
Question 185
A network engineer is implementing QoS with DSCP marking for voice traffic. Which DSCP value is recommended for voice bearer traffic according to RFC 4594?
A) CS3 (24)
B) AF41 (34)
C) EF (46)
D) CS0 (0)
Answer: C)
Explanation:
EF (Expedited Forwarding) with DSCP value 46 is the RFC 4594 recommended marking for voice bearer traffic (the actual voice RTP packets), providing consistent low-latency and low-jitter treatment across QoS-enabled networks essential for voice quality. DSCP (Differentiated Services Code Point) uses 6 bits in the IP header’s ToS field providing 64 possible values for traffic classification and prioritization. RFC 4594 standardizes service classes and their recommended DSCP markings ensuring interoperability across vendor equipment. Voice traffic has stringent requirements where one-way latency should remain below 150ms, jitter below 30ms, and packet loss under 1% to maintain acceptable call quality. EF PHB (Per-Hop Behavior) guarantees these requirements through strict priority queuing and dedicated bandwidth allocation.
Voice deployments actually use two markings: EF (46) for voice bearer traffic carrying actual audio encoded as RTP packets, and CS3 (24) for voice signaling traffic carrying call setup, teardown, and control using protocols like SIP, H.323, or SCCP. Separating bearer and signaling ensures control plane stability even during network congestion. AF (Assured Forwarding) classes provide different service levels with drop precedence, commonly used for business-critical data where AF41 (34) represents high-priority data with low drop probability. CS (Class Selector) values maintain backward compatibility with IP Precedence providing eight priority levels.
Best practice QoS implementations follow standardized marking strategies for consistency. Voice bearer receives EF providing absolute priority, video conferencing receives AF41 for bandwidth assurance with moderate loss tolerance, business-critical applications receive AF31 or AF21 based on importance, and bulk data receives AF11 or best effort (DSCP 0) for lowest priority. The marking strategy must be consistent network-wide because devices make forwarding decisions based on DSCP values, requiring enterprise-wide standards. Classification and marking typically occur at network edges where trust boundaries are established, with IP phones being trusted to mark their own traffic correctly using EF for voice and CS3 for signaling. Switches and routers trust these markings and queue appropriately. For non-QoS-aware applications, network infrastructure performs classification using ACLs matching traffic patterns and applying appropriate markings. Verification involves examining DSCP values in packet captures and confirming queue statistics showing voice traffic in priority queues with minimal drops.
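The marking strategy above reduces to a small lookup table, and because DSCP occupies the upper 6 bits of the former ToS byte, each value maps to a ToS byte by a 2-bit shift:

```python
# RFC 4594-style markings used in the text (AFxy decodes as 8*x + 2*y)
DSCP = {"EF": 46, "CS3": 24, "AF41": 34, "AF31": 26, "AF21": 18, "AF11": 10, "CS0": 0}

def tos_byte(dscp: int) -> int:
    """DSCP sits in the top 6 bits of the ToS byte; the low 2 bits are ECN."""
    return dscp << 2
```

This is why packet captures show voice bearer traffic with ToS byte 0xB8 (EF, 46 shifted left two bits) and voice signaling with 0x60 (CS3, 24).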
Question 186
An administrator needs to configure SPAN for traffic monitoring. Which SPAN configuration limitation must be considered when designing the monitoring setup?
A) SPAN can monitor multiple VLANs simultaneously
B) SPAN destination port cannot transmit traffic
C) SPAN works across switch stacks without restrictions
D) SPAN adds no performance overhead
Answer: B)
Explanation:
SPAN destination port cannot transmit traffic is a fundamental limitation where ports configured as SPAN destinations become monitoring-only receive ports, unable to send traffic or participate in normal switching operations. SPAN (Switched Port Analyzer) copies traffic from source ports, VLANs, or ACL matches to a destination port for analysis, enabling network troubleshooting, security monitoring, and application performance analysis. Understanding SPAN types and limitations is critical for proper deployment. Local SPAN operates within a single switch, copying traffic from sources to destinations on the same switch. Remote SPAN (RSPAN) extends monitoring across switches by transporting copied traffic over a special RSPAN VLAN to remote destinations. Encapsulated RSPAN (ERSPAN) encapsulates copied packets in GRE, allowing monitoring traffic to traverse Layer 3 routed networks.

SPAN configuration involves defining session numbers (typically 1-66), specifying source ports or VLANs to monitor, and designating a destination port for copied traffic. Sources can include physical ports, port channels, or VLANs. When monitoring ports, administrators choose ingress (received traffic), egress (transmitted traffic), or both directions. VLAN SPAN monitors all ports in specified VLANs. Multiple design considerations affect SPAN effectiveness.
Destination port oversubscription occurs when monitored traffic exceeds destination port bandwidth, causing dropped frames that create incomplete captures. For example, monitoring four 1Gbps ports to a single 1Gbps destination port risks oversubscription during high traffic. The destination port cannot be a source port simultaneously, cannot belong to EtherChannel bundles, and doesn’t participate in spanning tree, VLANs, or other switch protocols. Original packet headers including VLANs and CoS markings are typically preserved in SPAN copies for accurate analysis. SPAN sessions consume switch resources including dedicated buffers and processing, potentially impacting switch performance at scale. Specific platforms limit the number of simultaneous SPAN sessions supported. RSPAN requires dedicated VLAN for transporting monitoring traffic, consuming VLAN resources and requiring trunk configuration on intermediate switches. Best practices include monitoring only necessary traffic to minimize bandwidth and resource usage, using ingress monitoring when possible as it’s less resource-intensive than egress, considering ERSPAN for monitoring across Layer 3 boundaries, and being aware of platform-specific limitations documented in configuration guides. Verification uses “show monitor session” displaying configured sessions, sources, destinations, and traffic statistics.
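The oversubscription check described above is simple arithmetic, sketched here as an illustrative planning helper (function and parameter names are our own):

```python
def span_oversubscribed(source_rates_mbps, dest_capacity_mbps) -> bool:
    """True when aggregate monitored traffic can exceed the destination port,
    meaning copied frames will be dropped and the capture will be incomplete."""
    return sum(source_rates_mbps) > dest_capacity_mbps
```

Monitoring four 1 Gbps sources into a single 1 Gbps destination (the example in the text) fails this check as soon as combined utilization passes the destination’s line rate, so a 10 Gbps destination port or narrower source selection would be the fix.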
Question 187
A network engineer is implementing EIGRP routing. Which EIGRP metric component has the greatest impact on path selection when using default metric calculation?
A) Reliability
B) Load
C) Bandwidth and delay
D) MTU
Answer: C)
Explanation:
Bandwidth and delay are the EIGRP metric components that have the greatest impact on path selection when using default K-values, with bandwidth representing the minimum bandwidth along the path and delay representing cumulative latency. EIGRP uses a composite metric calculated from five components: bandwidth, delay, reliability, load, and MTU. However, the metric formula uses K-values as weights determining which components influence the calculation. Default K-values are K1=1 (bandwidth), K2=0 (load), K3=1 (delay), K4=0 (reliability), and K5=0 (MTU). With these defaults, only bandwidth and delay affect the metric calculation, while load, reliability, and MTU are ignored. The bandwidth component identifies the slowest link (minimum bandwidth) in the path, scaled by dividing the constant 10^7 by that bandwidth value in Kbps; the classic metric then multiplies the sum of the bandwidth and delay terms by 256. This means lower bandwidth results in higher metric values, with slower links penalized. Delay represents the cumulative latency summing the delay values of all interfaces along the path, measured in tens of microseconds. Interfaces have default delay values based on interface type; for example, Gigabit Ethernet has 10 microseconds of delay while serial interfaces have higher values depending on bandwidth configuration. Path selection uses the calculated metric, where lower metric values indicate preferred paths.
When multiple paths to a destination exist, EIGRP installs the lowest-metric path in the routing table as the successor route. Feasible successors are backup paths with metrics satisfying feasibility condition (advertised distance less than successor’s feasible distance), installed in the topology table and immediately usable if the successor fails. The metric calculation is critical for proper routing because mismatched K-values between neighbors prevent adjacency formation, incorrect bandwidth or delay values lead to suboptimal path selection, and manual metric manipulation affects routing decisions.
Bandwidth on serial interfaces deserves particular attention because Cisco routers default to 1544 Kbps regardless of actual circuit speed, requiring manual configuration with “bandwidth” command to reflect reality. This bandwidth value is used for EIGRP metric calculation and QoS, but doesn’t limit actual transmission speed. Delay can be manipulated with “delay” command to influence path selection without affecting real latency. Understanding bandwidth and delay’s role enables administrators to engineer traffic flows by adjusting these values to make preferred paths appear more attractive to EIGRP.
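The classic composite metric with default K-values can be computed directly, which makes the bandwidth/delay dominance concrete:

```python
def eigrp_metric(min_bw_kbps: int, total_delay_usec: int, k1: int = 1, k3: int = 1) -> int:
    """Classic EIGRP composite metric with default K-values (K2=K4=K5=0):
    256 * (10^7 / min_bandwidth_kbps + cumulative_delay / 10)."""
    bw = 10**7 // min_bw_kbps        # slowest link along the path, scaled
    dly = total_delay_usec // 10     # delay summed in tens of microseconds
    return 256 * (k1 * bw + k3 * dly)
```

A single T1 serial hop at the default 1544 Kbps bandwidth and 20000-microsecond delay yields 256 * (6476 + 2000) = 2169856, the well-known metric for a directly connected T1, which shows why correcting the “bandwidth” statement on serial interfaces matters so much.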
Question 188
An administrator is configuring port security with sticky MAC addresses. Which statement correctly describes sticky MAC address behavior?
A) Sticky addresses are lost after reload
B) Sticky addresses are dynamically learned and added to running config
C) Sticky addresses require manual configuration only
D) Sticky addresses cannot be saved
Answer: B)
Explanation:
Sticky MAC addresses are dynamically learned from traffic and automatically added to the running configuration, providing persistent port security without requiring administrators to manually configure each MAC address while offering the option to save learned addresses for retention across reboots. Port security limits which MAC addresses can send frames through switchports, preventing unauthorized device connections and mitigating MAC flooding attacks. Three MAC address learning methods exist with different operational characteristics. Dynamic learning allows the switch to learn MAC addresses from traffic up to the configured maximum, but learned addresses are stored only in dynamic memory and lost when the interface goes down or the switch reloads, requiring relearning after reboots. Static configuration requires administrators to manually specify allowed MAC addresses using “switchport port-security mac-address MAC” command, providing maximum security and persistence but creating administrative overhead especially in environments with many devices or frequent changes. Sticky learning combines the benefits of both approaches by dynamically learning MAC addresses from traffic but automatically adding them as “sticky” entries to the running configuration using “switchport port-security mac-address sticky MAC” format. This automation eliminates manual configuration while providing persistence when administrators save the running configuration to startup configuration with “copy running-config startup-config” command.
After saving, the sticky MAC addresses survive switch reloads and interface restarts, functioning like statically configured addresses. If running configuration isn’t saved, sticky addresses behave like dynamic addresses and are lost after reload. The sticky feature enables flexible security policies where initial deployment allows learning of legitimate device MAC addresses without manual effort, administrators verify learned addresses are correct, and running configuration is saved to preserve the security policy permanently. If devices change, administrators can clear specific sticky entries with “clear port-security sticky interface” command allowing new MAC addresses to be learned, or disable and re-enable port security to start fresh.
Combined with other port security features, sticky MAC addresses provide comprehensive access control. Configuration involves enabling port security with “switchport port-security”, enabling sticky learning with “switchport port-security mac-address sticky”, setting maximum MAC addresses with “switchport port-security maximum”, and configuring violation action. Verification uses “show port-security interface” displaying security status and violation counts, and “show running-config interface” showing sticky MAC addresses in the configuration.
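The sticky learn/save/reload lifecycle described above can be modeled with a toy class. This is an illustrative sketch, not switch software; the class and method names are our own:

```python
class StickyPortSecurity:
    """Toy model of sticky MAC learning on one secured port."""
    def __init__(self, maximum: int = 1):
        self.maximum = maximum
        self.running = set()    # sticky entries live in the running-config
        self.startup = set()
    def frame_from(self, mac: str) -> str:
        if mac in self.running:
            return "forward"
        if len(self.running) < self.maximum:
            self.running.add(mac)   # learned AND written to running-config
            return "forward"
        return "violation"
    def copy_run_start(self):       # 'copy running-config startup-config'
        self.startup = set(self.running)
    def reload(self):               # sticky entries survive only if saved
        self.running = set(self.startup)
```

The behavior the question tests falls out of the model: a learned address survives a reload only if `copy_run_start` ran first; otherwise it behaves like a dynamic entry and is lost.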
Question 189
A network engineer is implementing VXLAN for data center network virtualization. Which VXLAN component performs the encapsulation and decapsulation of tenant traffic?
A) VXLAN Gateway only
B) VTEP (VXLAN Tunnel Endpoint)
C) Underlay router
D) Spine switch exclusively
Answer: B)
Explanation:
VTEP (VXLAN Tunnel Endpoint) is the component that performs encapsulation and decapsulation of tenant traffic, wrapping original Ethernet frames in UDP/IP headers for transport across IP networks and removing the encapsulation at the destination. VXLAN (Virtual Extensible LAN) extends Layer 2 networks across Layer 3 infrastructure, addressing VLAN scalability limitations and enabling network virtualization in data centers. Traditional VLANs use 12-bit identifier supporting only 4094 VLANs, insufficient for multi-tenant cloud environments with thousands of customers. VXLAN uses 24-bit VXLAN Network Identifier (VNI) providing 16 million logical networks.
The architecture uses overlay and underlay concepts where underlay is the physical IP network providing connectivity between devices using standard routing, and overlay is the logical VXLAN network carrying tenant traffic isolated per VNI. VTEPs sit at the edge between overlay and underlay networks, typically implemented in switches, hypervisors, or network appliances. When a host sends a frame destined for another host in the same VXLAN segment, the source VTEP receives the frame, encapsulates it by adding VXLAN header including VNI, UDP header using destination port 4789, IP header with source VTEP and destination VTEP addresses, and outer Ethernet header for next-hop forwarding. The underlay network forwards this encapsulated packet using standard IP routing to the destination VTEP. The destination VTEP decapsulates the packet by removing outer headers, examining the VNI to determine the destination segment, and forwarding the original Ethernet frame to the destination host. This process is transparent to hosts which believe they’re on a traditional Layer 2 network.
VTEPs maintain MAC address tables mapping destination MAC addresses to remote VTEP IP addresses, learned through control plane protocols like BGP EVPN or data plane learning from source MAC addresses in received encapsulated frames. Multicast or unicast replication handles broadcast, unknown unicast, and multicast (BUM) traffic in the VXLAN segment. VXLAN benefits include Layer 2 extension across Layer 3 routed networks enabling workload mobility, scalability supporting millions of logical networks, and flexibility with overlay independent from underlay. VXLAN is foundational for Cisco ACI and other data center fabrics. Configuration complexity depends on implementation with some platforms automating VTEP configuration through centralized controllers while others require manual VTEP and VNI configuration.
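The header stack a VTEP adds (outer Ethernet + outer IP + UDP + VXLAN) has a fixed per-packet cost, sketched here; the helper names are our own:

```python
# Per-packet overhead a VTEP adds when encapsulating (IPv4 underlay, bytes)
OUTER_ETHERNET = 14
OUTER_IPV4 = 20
OUTER_UDP = 8        # destination port 4789
VXLAN_HEADER = 8     # flags + 24-bit VNI + reserved fields

def vxlan_overhead() -> int:
    return OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER

def required_underlay_mtu(tenant_mtu: int = 1500) -> int:
    """Underlay links must carry the tenant frame plus the encapsulation."""
    return tenant_mtu + vxlan_overhead()
```

The 50-byte result is why design guides call for raising the underlay MTU (to at least 1550 for standard 1500-byte tenant frames) so encapsulated packets are never fragmented.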
Question 190
An administrator needs to configure IPv6 addressing with duplicate address detection. How many neighbor solicitation messages are sent during the DAD process by default?
A) 0 messages
B) 1 message
C) 3 messages
D) 5 messages
Answer: B)
Explanation:
One neighbor solicitation message is sent by default during the Duplicate Address Detection (DAD) process, where a node checks if an IPv6 address is already in use on the link before assigning it to an interface. DAD is mandatory for IPv6 unicast addresses except for anycast addresses and loopback addresses, ensuring address uniqueness on the local link. The process occurs before an address transitions from tentative to valid state. When a node configures an IPv6 address through stateless autoconfiguration, DHCPv6, or manual configuration, the address initially enters tentative state where the interface cannot send or receive regular traffic using that address but can send and receive DAD messages.
The node performs DAD by sending a neighbor solicitation message to the solicited-node multicast address derived from the tentative address, with the target address field set to the tentative address and source address set to the unspecified address (::). If another node already uses that address, it responds with a neighbor advertisement indicating duplicate address detection. If no response is received within a timeout period (typically 1 second), the address is considered unique and transitions to valid state allowing normal use. The number of DAD attempts is configurable but defaults to 1, controlled by the DupAddrDetectTransmits parameter in router advertisements or by manual configuration on routers. Some implementations allow administrators to increase this value for more thorough checking in environments where packet loss might cause legitimate responses to be missed, or decrease to 0 disabling DAD entirely though this violates IPv6 specifications and risks address conflicts.
DAD optimization occurs with Optimistic DAD (RFC 4429) where interfaces can send traffic using tentative addresses during the DAD process, improving convergence time while still performing collision detection. DAD applies to multiple address types including global unicast addresses, unique local addresses, and link-local addresses. Link-local addresses are particularly important because they’re automatically generated and mandatory, making DAD essential for preventing conflicts in self-configured environments. DAD failures result in the interface not using the conflicting address, logging an error, and potentially trying different addresses or requiring administrative intervention. Verification of DAD process uses packet captures showing neighbor solicitation messages or checking IPv6 address states with commands like “show ipv6 interface” which displays addresses and their states (tentative, preferred, deprecated).
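The solicited-node multicast address that the DAD neighbor solicitation targets is derived mechanically from the tentative address — ff02::1:ff00:0/104 plus the low 24 bits of the unicast address:

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Solicited-node multicast: ff02::1:ffXX:XXXX, where XX:XXXX is the
    low 24 bits of the unicast address being checked."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))
```

For 2001:db8::1 the DAD neighbor solicitation goes to ff02::1:ff00:1, so only nodes whose addresses share those final 24 bits even process the probe — far more efficient than an all-nodes broadcast.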
Question 191
A network engineer is implementing REST APIs for network automation. Which HTTP method is used to retrieve information from a REST API without modifying resources?
A) POST method
B) GET method
C) PUT method
D) DELETE method
Answer: B)
Explanation:
GET method retrieves information from REST APIs without modifying any resources on the server, following REST principles where GET requests are safe and idempotent, meaning multiple identical requests produce the same result without side effects. REST (Representational State Transfer) is an architectural style for designing networked applications using HTTP as the transport protocol and standard HTTP methods for operations. REST APIs expose resources represented by URLs, with clients performing operations using HTTP methods. Understanding HTTP method semantics is fundamental for API consumption and development. GET retrieves resource representations, used for reading data without making changes, for example, retrieving configuration, querying device status, or listing objects. GET requests should be idempotent where repeated requests return the same data without altering server state.
POST creates new resources or triggers actions, sending data in the request body to the server which processes the data and typically returns the created resource’s location. POST is not idempotent because multiple identical POST requests typically create multiple resources. PUT updates existing resources by replacing them entirely with the representation provided in the request body. PUT is idempotent where sending the same update multiple times results in the resource being in the same final state. PATCH partially updates resources by modifying only specified fields rather than replacing the entire resource, useful for large resources when only a few attributes need changing. DELETE removes resources from the server, and while the operation itself changes state, DELETE is considered idempotent because deleting the same resource multiple times results in the same end state (resource deleted).
REST API design uses these methods consistently where GET for read operations returns 200 OK with resource data, POST for creation returns 201 Created with location of new resource, PUT for update returns 200 OK or 204 No Content, and DELETE returns 200 OK or 204 No Content. Status codes communicate operation results with 2xx indicating success, 4xx indicating client errors like authentication failure or invalid requests, and 5xx indicating server errors. Network automation commonly uses GET to retrieve operational state, configuration, or inventory information. Cisco DNA Center, Meraki Dashboard, and other platforms provide REST APIs where GET methods query device information, configuration templates, or network health data. Proper API usage includes authentication with headers or tokens, handling pagination for large datasets, and implementing error handling for failed requests.
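A minimal GET sketch using only the standard library follows; the endpoint URL and token header are illustrative placeholders, not a guaranteed API path, and the request is built but not sent:

```python
import urllib.request

# Build (but don't send) a GET against a hypothetical device-inventory endpoint
req = urllib.request.Request(
    "https://dnac.example.com/dna/intent/api/v1/network-device",
    headers={"X-Auth-Token": "EXAMPLE-TOKEN", "Accept": "application/json"},
)
# With no request body attached, urllib defaults the method to GET —
# a read-only, idempotent operation per REST semantics.
```

Attaching a body (`data=...`) flips the default method to POST, which mirrors the semantic split in the text: retrieval carries no body and changes nothing, while creation submits data.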
Question 192
An administrator is configuring TACACS+ for device administration. Which TACACS+ characteristic provides advantage over RADIUS for network device administration?
A) Uses UDP transport
B) Encrypts only password
C) Encrypts entire payload
D) No command authorization support
Answer: C)
Explanation:
Encrypts entire payload is a key TACACS+ advantage where the complete packet body is encrypted providing confidentiality for all authentication data, authorization requests, and accounting information, unlike RADIUS which only encrypts passwords. TACACS+ (Terminal Access Controller Access-Control System Plus) is a Cisco protocol for AAA services providing authentication, authorization, and accounting for network device access. Understanding differences between TACACS+ and RADIUS helps in protocol selection for specific use cases. TACACS+ uses TCP port 49 providing reliable delivery with connection-oriented communication, while RADIUS uses UDP requiring application-level retransmission logic. The TCP transport ensures AAA messages are delivered reliably especially important for authorization and accounting. TACACS+ separates authentication, authorization, and accounting into distinct processes allowing flexible implementation where services can use different servers or policies independently. This separation enables granular control where authentication might succeed but authorization restricts specific commands based on user roles.
Command authorization is TACACS+’s most powerful feature for network device administration, allowing per-command authorization where each command entered by administrators can be individually authorized by the TACACS+ server based on user and command attributes.
This enables role-based administration where junior administrators might be authorized only for show commands, mid-level administrators can modify configurations within their scope, and senior administrators have unrestricted access. RADIUS lacks this granular command authorization capability, providing only coarse-grained authorization typically limited to what privilege level users receive. TACACS+ encrypts the entire payload of packets after the header using MD5-based encryption, protecting usernames, authorization requests, and accounting data from eavesdropping. RADIUS only encrypts the password attribute while other attributes like usernames remain in plaintext, potentially exposing sensitive information. TACACS+ accounting is more flexible with accounting records sent for individual commands in addition to session start/stop, providing detailed audit trails of all administrative actions.
RADIUS accounting typically only tracks session information. These characteristics make TACACS+ preferred for network device administration requiring detailed auditing and granular authorization, while RADIUS remains popular for network access control scenarios like 802.1X and VPN authentication where its multivendor support and simplicity are valued. Configuration involves establishing TACACS+ server groups, configuring AAA to use TACACS+ for authentication, authorization, and accounting, and defining authorization rules on the TACACS+ server specifying command permissions per user or group.
Question 193
A network engineer is troubleshooting IPv6 connectivity issues. Which ICMPv6 message type is used by hosts to discover routers on the local link?
A) Echo Request
B) Router Solicitation
C) Neighbor Advertisement
D) Redirect Message
Answer: B)
Explanation:
Router Solicitation is the ICMPv6 message type sent by IPv6 hosts to discover routers on the local link, requesting that routers immediately send Router Advertisement messages rather than waiting for the next scheduled advertisement. ICMPv6 provides essential functions for IPv6 including error reporting, diagnostic functions, and neighbor discovery which replaces IPv4’s ARP. Router Discovery is a key Neighbor Discovery Protocol process enabling hosts to find routers automatically. The process operates through two message types. Router Advertisement messages are sent periodically by routers to the all-nodes multicast address (FF02::1) announcing their presence, providing network prefixes for address autoconfiguration, specifying hop limit and MTU values, and indicating whether DHCPv6 should be used. These unsolicited advertisements occur at regular intervals typically every 200 seconds. Router Solicitation messages are sent by hosts when they need immediate router information rather than waiting for the next scheduled advertisement, typically when an interface becomes active or a host boots.
Hosts send solicitations to the all-routers multicast address (FF02::2) prompting routers to respond immediately with Router Advertisements. This speeds up network connectivity establishment at boot or after link state changes.
The Router Advertisement contains critical information including one or more network prefixes that hosts use for stateless address autoconfiguration, flags indicating whether stateful (DHCPv6) or stateless autoconfiguration should be used, router lifetime indicating how long the router should be considered a default router, and reachable time and retransmit timer values for neighbor unreachability detection. Hosts process Router Advertisements to configure global unicast addresses, identify default gateways, and determine network parameters. Multiple routers can advertise on the same link providing redundancy with hosts typically selecting one as preferred default router based on router preference values in advertisements. Router Solicitations are also sent when hosts detect that their default router may be unreachable, triggering a search for alternative routers. The process is fundamental to IPv6’s plug-and-play nature enabling zero-configuration networking through SLAAC. Troubleshooting IPv6 connectivity often involves verifying Router Advertisement reception using packet captures or debugging commands. Missing Router Advertisements indicate router configuration issues or network connectivity problems preventing multicast delivery.
D) MTU
Answer: C)
Explanation:
Bandwidth and delay are the EIGRP metric components that have the greatest impact on path selection when using default K-values, with bandwidth representing the minimum bandwidth along the path and delay representing cumulative latency. EIGRP uses a composite metric calculated from five components: bandwidth, delay, reliability, load, and MTU. However, the metric formula uses K-values as weights determining which components influence the calculation. Default K-values are K1=1 (bandwidth), K2=0 (load), K3=1 (delay), K4=0 (reliability), and K5=0 (MTU). With these defaults, only bandwidth and delay affect metric calculation, while load, reliability, and MTU are ignored. The bandwidth component identifies the slowest link (minimum bandwidth) in the path, scaled by dividing 10^7 by the bandwidth value in Kbps in the classic metric (EIGRP wide metrics on modern IOS releases use a larger scaling constant). This means lower bandwidth results in higher metric values, with slower links penalized. Delay represents the cumulative latency summing delay values of all interfaces along the path, measured in tens of microseconds. Interfaces have default delay values based on interface type; for example, Gigabit Ethernet has 10 microseconds of delay while serial interfaces have higher values depending on bandwidth configuration. Path selection uses the calculated metric where lower metric values indicate preferred paths.
When multiple paths to a destination exist, EIGRP installs the lowest-metric path in the routing table as the successor route. Feasible successors are backup paths with metrics satisfying feasibility condition (advertised distance less than successor’s feasible distance), installed in the topology table and immediately usable if the successor fails.
The metric calculation is critical for proper routing because mismatched K-values between neighbors prevent adjacency formation, incorrect bandwidth or delay values lead to suboptimal path selection, and manual metric manipulation affects routing decisions. Bandwidth on serial interfaces deserves particular attention because Cisco routers default to 1544 Kbps regardless of actual circuit speed, requiring manual configuration with “bandwidth” command to reflect reality. This bandwidth value is used for EIGRP metric calculation and QoS, but doesn’t limit actual transmission speed. Delay can be manipulated with “delay” command to influence path selection without affecting real latency. Understanding bandwidth and delay’s role enables administrators to engineer traffic flows by adjusting these values to make preferred paths appear more attractive to EIGRP.
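The classic default-K-value calculation described above can be sketched in a few lines of Python. This is an illustrative helper (the function name and inputs are my own), showing how the formula reduces to 256 * (10^7 // min_bandwidth + delay_in_tens_of_microseconds) when K2, K4, and K5 are zero:

```python
def eigrp_classic_metric(min_bw_kbps: int, total_delay_usec: int) -> int:
    # With default K-values (K1=K3=1, K2=K4=K5=0) the composite metric
    # reduces to 256 * (10^7 // min_bandwidth_kbps + delay_in_tens_of_usec).
    bw_term = 10**7 // min_bw_kbps        # slowest link on the path, in Kbps
    delay_term = total_delay_usec // 10   # cumulative delay, tens of microseconds
    return 256 * (bw_term + delay_term)

# Path: one serial T1 hop (1544 Kbps, 20000 usec default delay)
# plus a Gigabit Ethernet hop (10 usec default delay).
metric = eigrp_classic_metric(min_bw_kbps=1544, total_delay_usec=20010)
print(metric)  # 2170112
```

The example also shows why the T1's default 1544 Kbps dominates the result: the bandwidth term (6476) and the serial delay term (2001) dwarf the Gigabit Ethernet contribution.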
Question 188
An administrator is configuring port security with sticky MAC addresses. Which statement correctly describes sticky MAC address behavior?
A) Sticky addresses are lost after reload
B) Sticky addresses are dynamically learned and added to running config
C) Sticky addresses require manual configuration only
D) Sticky addresses cannot be saved
Answer: B)
Explanation:
Sticky MAC addresses are dynamically learned from traffic and automatically added to the running configuration, providing persistent port security without requiring administrators to manually configure each MAC address while offering the option to save learned addresses for retention across reboots. Port security limits which MAC addresses can send frames through switchports, preventing unauthorized device connections and mitigating MAC flooding attacks. Three MAC address learning methods exist with different operational characteristics.
Dynamic learning allows the switch to learn MAC addresses from traffic up to the configured maximum, but learned addresses are stored only in dynamic memory and lost when the interface goes down or the switch reloads, requiring relearning after reboots. Static configuration requires administrators to manually specify allowed MAC addresses using “switchport port-security mac-address MAC” command, providing maximum security and persistence but creating administrative overhead especially in environments with many devices or frequent changes. Sticky learning combines the benefits of both approaches by dynamically learning MAC addresses from traffic but automatically adding them as “sticky” entries to the running configuration using “switchport port-security mac-address sticky MAC” format. This automation eliminates manual configuration while providing persistence when administrators save the running configuration to startup configuration with “copy running-config startup-config” command. After saving, the sticky MAC addresses survive switch reloads and interface restarts, functioning like statically configured addresses. If running configuration isn’t saved, sticky addresses behave like dynamic addresses and are lost after reload.
The sticky feature enables flexible security policies where initial deployment allows learning of legitimate device MAC addresses without manual effort, administrators verify learned addresses are correct, and running configuration is saved to preserve the security policy permanently. If devices change, administrators can clear specific sticky entries with “clear port-security sticky interface” command allowing new MAC addresses to be learned, or disable and re-enable port security to start fresh. Combined with other port security features, sticky MAC addresses provide comprehensive access control. Configuration involves enabling port security with “switchport port-security”, enabling sticky learning with “switchport port-security mac-address sticky”, setting maximum MAC addresses with “switchport port-security maximum”, and configuring violation action. Verification uses “show port-security interface” displaying security status and violation counts, and “show running-config interface” showing sticky MAC addresses in the configuration.
Question 189
A network engineer is implementing VXLAN for data center network virtualization. Which VXLAN component performs the encapsulation and decapsulation of tenant traffic?
A) VXLAN Gateway only
B) VTEP (VXLAN Tunnel Endpoint)
C) Underlay router
D) Spine switch exclusively
Answer: B)
Explanation:
VTEP (VXLAN Tunnel Endpoint) is the component that performs encapsulation and decapsulation of tenant traffic, wrapping original Ethernet frames in UDP/IP headers for transport across IP networks and removing the encapsulation at the destination. VXLAN (Virtual Extensible LAN) extends Layer 2 networks across Layer 3 infrastructure, addressing VLAN scalability limitations and enabling network virtualization in data centers. Traditional VLANs use 12-bit identifier supporting only 4094 VLANs, insufficient for multi-tenant cloud environments with thousands of customers. VXLAN uses 24-bit VXLAN Network Identifier (VNI) providing 16 million logical networks. The architecture uses overlay and underlay concepts where underlay is the physical IP network providing connectivity between devices using standard routing, and overlay is the logical VXLAN network carrying tenant traffic isolated per VNI. VTEPs sit at the edge between overlay and underlay networks, typically implemented in switches, hypervisors, or network appliances. When a host sends a frame destined for another host in the same VXLAN segment, the source VTEP receives the frame, encapsulates it by adding VXLAN header including VNI, UDP header using destination port 4789, IP header with source VTEP and destination VTEP addresses, and outer Ethernet header for next-hop forwarding.
The underlay network forwards this encapsulated packet using standard IP routing to the destination VTEP. The destination VTEP decapsulates the packet by removing outer headers, examining the VNI to determine the destination segment, and forwarding the original Ethernet frame to the destination host. This process is transparent to hosts which believe they’re on a traditional Layer 2 network. VTEPs maintain MAC address tables mapping destination MAC addresses to remote VTEP IP addresses, learned through control plane protocols like BGP EVPN or data plane learning from source MAC addresses in received encapsulated frames. Multicast or unicast replication handles broadcast, unknown unicast, and multicast (BUM) traffic in the VXLAN segment.
VXLAN benefits include Layer 2 extension across Layer 3 routed networks enabling workload mobility, scalability supporting millions of logical networks, and flexibility with overlay independent from underlay. VXLAN is foundational for Cisco ACI and other data center fabrics. Configuration complexity depends on implementation with some platforms automating VTEP configuration through centralized controllers while others require manual VTEP and VNI configuration.
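The 8-byte VXLAN header the VTEP adds (flags, reserved fields, and the 24-bit VNI, per RFC 7348) is simple enough to build and parse directly. A minimal sketch, with my own function names and with the outer Ethernet/IP/UDP headers omitted for brevity:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    # Flags byte 0x08 sets the I bit (valid VNI); reserved fields are zero.
    header = struct.pack("!B3s3sB", 0x08, b"\x00\x00\x00",
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (VNI, original frame)."""
    vni = int.from_bytes(packet[4:7], "big")
    return vni, packet[8:]
```

The 24-bit VNI field is what yields the roughly 16 million segment IDs mentioned above, versus the 12-bit VLAN ID.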
Question 190
An administrator needs to configure IPv6 addressing with duplicate address detection. How many neighbor solicitation messages are sent during the DAD process by default?
A) 0 messages
B) 1 message
C) 3 messages
D) 5 messages
Answer: B)
Explanation:
One neighbor solicitation message is sent by default during the Duplicate Address Detection (DAD) process, where a node checks if an IPv6 address is already in use on the link before assigning it to an interface. DAD is mandatory for IPv6 unicast addresses except for anycast addresses and loopback addresses, ensuring address uniqueness on the local link.
The process occurs before an address transitions from tentative to valid state. When a node configures an IPv6 address through stateless autoconfiguration, DHCPv6, or manual configuration, the address initially enters tentative state where the interface cannot send or receive regular traffic using that address but can send and receive DAD messages. The node performs DAD by sending a neighbor solicitation message to the solicited-node multicast address derived from the tentative address, with the target address field set to the tentative address and the source address set to the unspecified address (::). If another node already uses that address, it responds with a neighbor advertisement, indicating a duplicate. If no response is received within the timeout period (typically 1 second), the address is considered unique and transitions to valid state, allowing normal use. The number of DAD attempts is configurable but defaults to 1, controlled by the DupAddrDetectTransmits parameter, which administrators can adjust per interface. Some implementations allow administrators to increase this value for more thorough checking in environments where packet loss might cause legitimate responses to be missed, or to decrease it to 0, disabling DAD entirely, though this violates IPv6 specifications and risks address conflicts.
DAD optimization occurs with Optimistic DAD (RFC 4429) where interfaces can send traffic using tentative addresses during the DAD process, improving convergence time while still performing collision detection. DAD applies to multiple address types including global unicast addresses, unique local addresses, and link-local addresses. Link-local addresses are particularly important because they’re automatically generated and mandatory, making DAD essential for preventing conflicts in self-configured environments.
DAD failures result in the interface not using the conflicting address, logging an error, and potentially trying different addresses or requiring administrative intervention. Verification of DAD process uses packet captures showing neighbor solicitation messages or checking IPv6 address states with commands like “show ipv6 interface” which displays addresses and their states (tentative, preferred, deprecated).
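The solicited-node multicast group that the DAD neighbor solicitation targets is derived mechanically from the tentative address: the last 24 bits are appended to the FF02::1:FF00:0/104 prefix. A small illustrative helper (function name my own) using only the standard library:

```python
import ipaddress

def solicited_node_multicast(addr: str) -> str:
    """Derive the solicited-node multicast group (FF02::1:FFxx:xxxx)
    that a DAD Neighbor Solicitation for `addr` is sent to."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF     # last 24 bits
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))     # /104 prefix
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node_multicast("2001:db8::200e:8c6c"))  # ff02::1:ff0e:8c6c
```

Because only hosts whose addresses share those low 24 bits join the group, DAD probes disturb far fewer nodes than an IPv4 ARP broadcast would.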
Question 191
A network engineer is implementing REST APIs for network automation. Which HTTP method is used to retrieve information from a REST API without modifying resources?
A) POST method
B) GET method
C) PUT method
D) DELETE method
Answer: B)
Explanation:
GET method retrieves information from REST APIs without modifying any resources on the server, following REST principles where GET requests are safe and idempotent, meaning multiple identical requests produce the same result without side effects. REST (Representational State Transfer) is an architectural style for designing networked applications using HTTP as the transport protocol and standard HTTP methods for operations. REST APIs expose resources represented by URLs, with clients performing operations using HTTP methods. Understanding HTTP method semantics is fundamental for API consumption and development. GET retrieves resource representations, used for reading data without making changes, for example, retrieving configuration, querying device status, or listing objects. GET requests should be idempotent where repeated requests return the same data without altering server state. POST creates new resources or triggers actions, sending data in the request body to the server which processes the data and typically returns the created resource’s location. POST is not idempotent because multiple identical POST requests typically create multiple resources.
PUT updates existing resources by replacing them entirely with the representation provided in the request body. PUT is idempotent where sending the same update multiple times results in the resource being in the same final state. PATCH partially updates resources by modifying only specified fields rather than replacing the entire resource, useful for large resources when only a few attributes need changing. DELETE removes resources from the server, and while the operation itself changes state, DELETE is considered idempotent because deleting the same resource multiple times results in the same end state (resource deleted).
REST API design uses these methods consistently where GET for read operations returns 200 OK with resource data, POST for creation returns 201 Created with location of new resource, PUT for update returns 200 OK or 204 No Content, and DELETE returns 200 OK or 204 No Content. Status codes communicate operation results with 2xx indicating success, 4xx indicating client errors like authentication failure or invalid requests, and 5xx indicating server errors. Network automation commonly uses GET to retrieve operational state, configuration, or inventory information. Cisco DNA Center, Meraki Dashboard, and other platforms provide REST APIs where GET methods query device information, configuration templates, or network health data. Proper API usage includes authentication with headers or tokens, handling pagination for large datasets, and implementing error handling for failed requests.
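The safety and idempotency semantics above can be made concrete with a toy in-memory resource store (class and method names are my own invention, standing in for a real REST backend): GET reads without side effects, POST mints a new resource on every call, while PUT and DELETE converge to the same end state no matter how often they are repeated.

```python
import itertools

class ResourceStore:
    """Toy in-memory store illustrating HTTP method semantics."""

    def __init__(self):
        self._data: dict[int, dict] = {}
        self._ids = itertools.count(1)

    def get(self, rid: int):
        # GET: safe and idempotent -- reads state, never changes it
        return self._data.get(rid)

    def post(self, body: dict) -> int:
        # POST: not idempotent -- each call creates a new resource
        rid = next(self._ids)
        self._data[rid] = body
        return rid

    def put(self, rid: int, body: dict) -> None:
        # PUT: idempotent -- full replacement, repeatable with same result
        self._data[rid] = body

    def delete(self, rid: int) -> None:
        # DELETE: idempotent -- deleting twice leaves the same end state
        self._data.pop(rid, None)
```

Posting the same body twice yields two distinct resource IDs, while putting or deleting the same resource twice leaves the store unchanged after the first call.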
Question 192
An administrator is configuring TACACS+ for device administration. Which TACACS+ characteristic provides advantage over RADIUS for network device administration?
A) Uses UDP transport
B) Encrypts only password
C) Encrypts entire payload
D) No command authorization support
Answer: C)
Explanation:
Encrypts entire payload is a key TACACS+ advantage where the complete packet body is encrypted providing confidentiality for all authentication data, authorization requests, and accounting information, unlike RADIUS which only encrypts passwords. TACACS+ (Terminal Access Controller Access-Control System Plus) is a Cisco protocol for AAA services providing authentication, authorization, and accounting for network device access. Understanding differences between TACACS+ and RADIUS helps in protocol selection for specific use cases. TACACS+ uses TCP port 49 providing reliable delivery with connection-oriented communication, while RADIUS uses UDP requiring application-level retransmission logic.
The TCP transport ensures AAA messages are delivered reliably especially important for authorization and accounting. TACACS+ separates authentication, authorization, and accounting into distinct processes allowing flexible implementation where services can use different servers or policies independently. This separation enables granular control where authentication might succeed but authorization restricts specific commands based on user roles. Command authorization is TACACS+’s most powerful feature for network device administration, allowing per-command authorization where each command entered by administrators can be individually authorized by the TACACS+ server based on user and command attributes. This enables role-based administration where junior administrators might be authorized only for show commands, mid-level administrators can modify configurations within their scope, and senior administrators have unrestricted access. RADIUS lacks this granular command authorization capability, providing only coarse-grained authorization typically limited to what privilege level users receive. TACACS+ encrypts the entire payload of packets after the header using MD5-based encryption, protecting usernames, authorization requests, and accounting data from eavesdropping.
RADIUS only encrypts the password attribute while other attributes like usernames remain in plaintext, potentially exposing sensitive information. TACACS+ accounting is more flexible with accounting records sent for individual commands in addition to session start/stop, providing detailed audit trails of all administrative actions. RADIUS accounting typically only tracks session information. These characteristics make TACACS+ preferred for network device administration requiring detailed auditing and granular authorization, while RADIUS remains popular for network access control scenarios like 802.1X and VPN authentication where its multivendor support and simplicity are valued. Configuration involves establishing TACACS+ server groups, configuring AAA to use TACACS+ for authentication, authorization, and accounting, and defining authorization rules on the TACACS+ server specifying command permissions per user or group.
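The full-payload protection works by XORing the packet body with an MD5-derived keystream seeded from the session ID, shared secret, version, and sequence number (RFC 8907, which now calls this "obfuscation" rather than encryption). A sketch of that pad construction, assuming the caller supplies the header fields:

```python
import hashlib
import struct

def tacacs_pseudo_pad(session_id: int, secret: bytes, version: int,
                      seq_no: int, length: int) -> bytes:
    """MD5 keystream used for TACACS+ body obfuscation (RFC 8907)."""
    seed = struct.pack("!I", session_id) + secret + bytes([version, seq_no])
    pad, chunk = b"", b""
    while len(pad) < length:
        # Each MD5 block chains the previous digest into the input
        chunk = hashlib.md5(seed + chunk).digest()
        pad += chunk
    return pad[:length]

def tacacs_obfuscate(body: bytes, session_id: int, secret: bytes,
                     version: int, seq_no: int) -> bytes:
    """XOR the body with the pad; applying it twice restores the body."""
    pad = tacacs_pseudo_pad(session_id, secret, version, seq_no, len(body))
    return bytes(a ^ b for a, b in zip(body, pad))
```

Because the operation is a symmetric XOR, the same function both obfuscates and deobfuscates, and every field in the body (usernames, commands, accounting data) is covered, not just the password as in RADIUS.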
Question 193
A network engineer is troubleshooting IPv6 connectivity issues. Which ICMPv6 message type is used by hosts to discover routers on the local link?
A) Echo Request
B) Router Solicitation
C) Neighbor Advertisement
D) Redirect Message
Answer: B)
Explanation:
Router Solicitation is the ICMPv6 message type sent by IPv6 hosts to discover routers on the local link, requesting that routers immediately send Router Advertisement messages rather than waiting for the next scheduled advertisement. ICMPv6 provides essential functions for IPv6 including error reporting, diagnostic functions, and neighbor discovery which replaces IPv4’s ARP. Router Discovery is a key Neighbor Discovery Protocol process enabling hosts to find routers automatically.
The process operates through two message types. Router Advertisement messages are sent periodically by routers to the all-nodes multicast address (FF02::1) announcing their presence, providing network prefixes for address autoconfiguration, specifying hop limit and MTU values, and indicating whether DHCPv6 should be used. These unsolicited advertisements occur at regular intervals typically every 200 seconds. Router Solicitation messages are sent by hosts when they need immediate router information rather than waiting for the next scheduled advertisement, typically when an interface becomes active or a host boots. Hosts send solicitations to the all-routers multicast address (FF02::2) prompting routers to respond immediately with Router Advertisements. This speeds up network connectivity establishment at boot or after link state changes. The Router Advertisement contains critical information including one or more network prefixes that hosts use for stateless address autoconfiguration, flags indicating whether stateful (DHCPv6) or stateless autoconfiguration should be used, router lifetime indicating how long the router should be considered a default router, and reachable time and retransmit timer values for neighbor unreachability detection.
Hosts process Router Advertisements to configure global unicast addresses, identify default gateways, and determine network parameters. Multiple routers can advertise on the same link providing redundancy with hosts typically selecting one as preferred default router based on router preference values in advertisements. Router Solicitations are also sent when hosts detect that their default router may be unreachable, triggering a search for alternative routers. The process is fundamental to IPv6’s plug-and-play nature enabling zero-configuration networking through SLAAC. Troubleshooting IPv6 connectivity often involves verifying Router Advertisement reception using packet captures or debugging commands. Missing Router Advertisements indicate router configuration issues or network connectivity problems preventing multicast delivery.
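The Router Solicitation itself is a tiny ICMPv6 message: type 133, code 0, a checksum, and a reserved word, sent to FF02::2. The sketch below (function names my own, options like the source link-layer address omitted) builds one and computes the ICMPv6 checksum over the RFC 8200 pseudo-header:

```python
import struct
import ipaddress

def icmpv6_checksum(src: str, dst: str, payload: bytes) -> int:
    """Internet checksum over the IPv6 pseudo-header plus ICMPv6 payload."""
    pseudo = (ipaddress.IPv6Address(src).packed
              + ipaddress.IPv6Address(dst).packed
              + struct.pack("!I", len(payload))     # upper-layer length
              + b"\x00\x00\x00" + bytes([58]))      # zeros + next header (ICMPv6)
    data = pseudo + payload
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                              # fold carries
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def router_solicitation(src: str) -> bytes:
    """Build a minimal RS (type 133) destined to all-routers ff02::2."""
    hdr = struct.pack("!BBHI", 133, 0, 0, 0)        # type, code, csum=0, reserved
    csum = icmpv6_checksum(src, "ff02::2", hdr)
    return struct.pack("!BBHI", 133, 0, csum, 0)
```

Recomputing the checksum over a correctly checksummed packet yields zero, which is how a receiver validates it.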
Question 194
An administrator needs to configure NetFlow for traffic analysis. Which NetFlow component collects and aggregates flow data from multiple routers?
A) Flow exporter on router
B) NetFlow collector server
C) Flow cache only
D) SNMP manager
Answer: B)
Explanation:
NetFlow collector server receives, aggregates, stores, and analyzes flow data exported from multiple network devices, providing centralized traffic visibility and reporting essential for capacity planning, security analysis, and billing applications. NetFlow is Cisco’s technology for collecting IP traffic information, creating records of network flows defined as unidirectional sequences of packets sharing common attributes like source/destination addresses, ports, protocol, and ToS. The architecture consists of three primary components working together. Flow exporter runs on routers and switches, monitoring packets, identifying flows, maintaining flow cache with active flow records, and exporting completed flows to collectors when flows terminate or cache entries age out.
Exporters use NetFlow version 5, 9, or IPFIX (version 10) to send flow records via UDP to one or more collector IP addresses. Flow collector is typically a dedicated server or appliance running specialized software that receives exported flow records from multiple network devices, aggregates data from distributed sources, stores flow data for historical analysis, and provides query interfaces for reporting and analysis. Popular collectors include commercial products like SolarWinds NetFlow Traffic Analyzer, PRTG, and open-source solutions like nfdump, ntop, and ElastiFlow. Analysis applications run on or integrate with collectors, providing visualization through dashboards and charts, reporting on top talkers, applications, and conversations, anomaly detection identifying unusual traffic patterns, and security analysis detecting scanning, DDoS, and data exfiltration.
NetFlow records contain multiple fields including source and destination IP addresses and ports, IP protocol type, Type of Service byte for QoS information, input and output interface indices, byte and packet counts, timestamps for flow start and end, and TCP flags for connection state information. Version 9 and IPFIX use templates enabling flexible field definitions and custom attributes. Deployment involves enabling NetFlow on router interfaces with “ip flow ingress” and optionally “ip flow egress” for bidirectional visibility, configuring flow export destination with “ip flow-export destination collector-ip port” typically using port 2055 or 9996, setting NetFlow version with “ip flow-export version”, and optionally configuring sampling to reduce overhead on high-traffic interfaces. Sampling exports only a subset of flows, for example 1-in-100 packets, reducing router CPU and collector load while providing statistically representative traffic analysis.
Flow cache size and timeout values can be tuned where active timeout exports long-lived flows periodically even if still active (default 30 minutes), and inactive timeout exports flows after inactivity period (default 15 seconds). NetFlow provides valuable insights including application visibility identifying which applications consume bandwidth, user traffic patterns showing which users generate most traffic, security monitoring detecting anomalous behaviors like port scanning or data exfiltration, capacity planning identifying trending bandwidth growth and peak utilization periods, and billing support for service providers charging based on usage. Flexible NetFlow enhances traditional NetFlow with user-defined keys and fields enabling custom flow definitions. Verification uses “show ip flow export” displaying export configuration and statistics, “show ip cache flow” showing active flows in cache, and collector interfaces showing received flow records and providing analysis tools.
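On the collector side, receiving a NetFlow v5 datagram starts with parsing its fixed 24-byte export header. A minimal sketch using only the standard library (function names are my own; real collectors go on to unpack the `count` flow records that follow the header):

```python
import struct

V5_HEADER = struct.Struct("!HHIIIIBBH")   # 24-byte NetFlow v5 export header

def pack_v5_header(count: int, sys_uptime_ms: int, unix_secs: int,
                   unix_nsecs: int, flow_sequence: int,
                   engine_type: int = 0, engine_id: int = 0,
                   sampling_interval: int = 0) -> bytes:
    """Build the v5 header; `count` flow records follow it in the datagram."""
    return V5_HEADER.pack(5, count, sys_uptime_ms, unix_secs, unix_nsecs,
                          flow_sequence, engine_type, engine_id,
                          sampling_interval)

def parse_v5_header(datagram: bytes) -> dict:
    (version, count, sys_uptime, unix_secs, _unix_nsecs, flow_seq,
     _engine_type, _engine_id, _sampling) = V5_HEADER.unpack_from(datagram)
    if version != 5:
        raise ValueError(f"not a NetFlow v5 datagram (version={version})")
    return {"version": version, "count": count, "sys_uptime_ms": sys_uptime,
            "unix_secs": unix_secs, "flow_sequence": flow_seq}
```

The `flow_sequence` field lets collectors detect lost export datagrams, which matters because export rides over UDP.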
Question 195
A network engineer is implementing policy-based routing (PBR). Which PBR action redirects traffic to a specified next-hop IP address?
A) Match traffic only
B) Set ip next-hop
C) Filter packets exclusively
D) Drop traffic silently
Answer: B)
Explanation:
Set ip next-hop is the PBR action that redirects matching traffic to a specified next-hop IP address, overriding the routing table and enabling routing decisions based on criteria beyond destination address alone. Policy-based routing provides granular control over packet forwarding by making routing decisions based on extended criteria including source address, destination address, protocol type, port numbers, packet size, or application signatures rather than relying solely on destination-based routing table lookups. PBR enables traffic engineering, load distribution, and service differentiation impossible with traditional routing. The configuration uses route maps defining match criteria and set actions. Match statements identify traffic for policy routing using access lists matching source/destination addresses and protocols, extended ACLs including port information, or packet length ranges.
Set statements specify actions for matched traffic including “set ip next-hop” directing traffic to specific next-hop IP addresses which must be reachable through directly connected networks, “set ip default next-hop” using policy-based routing only when the routing table contains no explicit route, “set interface” forwarding traffic out specific interfaces, “set ip precedence” or “set ip dscp” modifying QoS markings, and “set ip df” manipulating the Don’t Fragment bit. Multiple set commands can be combined in a single route map sequence. Common PBR use cases include source-based routing where traffic from different source networks uses different ISP connections for load distribution or policy requirements, application-based routing where specific applications like VoIP use premium links while bulk data uses cost-effective connections, and selective path manipulation for traffic engineering or bypassing congested links. PBR is configured on ingress interfaces using “ip policy route-map” command, evaluating packets received on that interface against the route map before routing table lookup. Local policy routing applies PBR to locally generated traffic from the router itself using “ip local policy route-map” in global configuration.
Important considerations include performance impact because PBR disables fast switching requiring process switching for policy-routed packets on some platforms, though modern hardware platforms support PBR in hardware with minimal performance impact. Route map evaluation follows sequence numbers in ascending order with first match determining the action, and implicit deny at the end means unmatched traffic follows normal routing. Verification uses “show route-map” displaying route map configuration and match/set statistics, and “debug ip policy” showing real-time policy routing decisions though debug should be used cautiously in production.
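The PBR configuration flow described above (ACL, route map with match/set, ingress interface application) can be sketched as follows; the ACL name, route-map name, addresses, and interface are illustrative assumptions, not taken from the question:

```
! Classify web traffic from the 10.1.1.0/24 source network
ip access-list extended PBR-WEB
 permit tcp 10.1.1.0 0.0.0.255 any eq www
!
! Route map: matched traffic is forwarded to the ISP-A next hop
route-map ISP-A-WEB permit 10
 match ip address PBR-WEB
 set ip next-hop 203.0.113.1
!
! Apply the policy to the ingress interface
interface GigabitEthernet0/1
 ip policy route-map ISP-A-WEB
```

Traffic not matching sequence 10 falls through the route map's implicit deny and follows the normal routing table, as noted above. Verification would use "show route-map ISP-A-WEB" to confirm match counters increment.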
Question 196
An administrator is configuring AAA authorization for network devices. Which authorization type controls which commands users can execute based on privilege level?
A) EXEC authorization only
B) Commands authorization
C) Network authorization
D) Service authorization
Answer: B)
Explanation:
Commands authorization controls which specific commands users can execute at various privilege levels, providing granular command-level access control essential for role-based network device administration. AAA authorization determines what authenticated users are permitted to do after successful login. Multiple authorization types serve different purposes. EXEC authorization controls whether users can access EXEC shell at all and which privilege level they receive, determining the initial command mode (user EXEC versus privileged EXEC) and associated privilege level (0-15). Commands authorization provides the most granular control by authorizing individual commands at specific privilege levels, allowing administrators to create custom command sets where users can execute only approved commands based on their roles. For example, Level 1 might allow only show commands, Level 5 adds configuration viewing, Level 10 adds limited configuration changes, and Level 15 provides full access. Network authorization applies to network services like PPP and SLIP typically used for dial-up connections.
Configuration authorization controls whether running-config downloads from AAA servers are permitted, used with AutoInstall and similar automated provisioning. Commands authorization requires defining authorization on both the network device and AAA server. On devices, configuration uses “aaa authorization commands level method-list” where level specifies privilege level (0-15) and method-list defines authorization sources like TACACS+ servers or local database. The most common configuration is “aaa authorization commands 15 default group tacacs+ local” authorizing privilege level 15 commands using TACACS+ with local fallback. AAA servers like ISE or ACS define authorization policies specifying which command patterns are permitted or denied per user or group. TACACS+ excels at command authorization because its protocol design separates authorization from authentication and provides detailed per-command authorization capabilities. RADIUS lacks native command authorization support making it less suitable for device administration though some implementations add proprietary extensions. Command authorization policies use regular expressions matching command strings allowing flexible rules like permitting all show commands while blocking configuration commands, allowing specific configuration commands like VLAN creation but denying routing changes, or permitting full access to specific device groups while restricting access elsewhere. Authorization enforcement occurs in real-time where each command entered is sent to the AAA server for authorization before execution, receiving either permit or deny response.
This provides immediate policy enforcement and detailed audit trails of attempted commands. Best practices include defining role-based authorization matching organizational structure, testing authorization rules thoroughly before production deployment, implementing fallback to local authorization for emergency access during AAA server failures, and maintaining detailed logging of authorization decisions for security auditing.
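A minimal device-side sketch of the command authorization setup described above, assuming a modern IOS TACACS+ server definition; the server name, address, and key are illustrative placeholders:

```
! Define the TACACS+ server (name, address, and key are hypothetical)
tacacs server ISE-TAC
 address ipv4 192.0.2.10
 key MySharedKey
!
aaa new-model
! Authorize commands at privilege levels 1 and 15 via TACACS+,
! falling back to the local database if the server is unreachable
aaa authorization commands 1 default group tacacs+ local
aaa authorization commands 15 default group tacacs+ local
! Also send configuration-mode commands for authorization
aaa authorization config-commands
```

The "local" fallback method implements the emergency-access best practice mentioned above, keeping administrators from being locked out during an AAA server outage.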
Question 197
A network engineer is implementing OSPF multi-area design. Which LSA type is generated by the ABR to advertise routes between OSPF areas?
A) Type 1 Router LSA
B) Type 2 Network LSA
C) Type 3 Summary LSA
D) Type 5 External LSA
Answer: C)
Explanation:
Type 3 Summary LSA is generated by Area Border Routers to advertise inter-area routes, summarizing networks from one area and advertising them into adjacent areas, enabling reachability between OSPF areas while maintaining area boundaries. OSPF uses different LSA types for various routing information purposes, and understanding each type is essential for troubleshooting and design. Type 1 Router LSA is generated by every OSPF router describing its directly connected links and neighboring routers, flooded only within the originating area providing intra-area topology information. Type 2 Network LSA is generated by Designated Routers on broadcast and NBMA networks listing all routers on the multi-access segment, also flooded only within the area. Type 3 Summary LSA is generated exclusively by ABRs advertising networks from one area into other areas, providing inter-area reachability without flooding complete topology information. ABRs receive Type 1 and Type 2 LSAs from internal routers within an area, calculate best paths to destinations, and generate Type 3 LSAs advertising those destinations into other areas. This summarization maintains area separation where routers only have detailed topology of their own area but can reach destinations in other areas through ABR-generated summaries. Type 4 ASBR Summary LSA is generated by ABRs advertising the location of ASBRs (routers performing redistribution) enabling routers to find the path to external route sources.
Type 5 External LSA is generated by ASBRs describing routes redistributed from other protocols or static routes, flooded throughout the OSPF domain except stub areas. Type 7 NSSA External LSA is generated by ASBRs within NSSA areas for external routes, converted to Type 5 by the NSSA ABR when leaving the NSSA area. Type 3 LSA design considerations include route summarization where ABRs can be configured to summarize multiple Type 3 LSAs into single summary advertisements, reducing routing table size and LSA flooding overhead using the “area X range” command. This manual summarization is critical for scalable OSPF designs. Type 3 LSAs are regenerated by each ABR along the path, so a route learned in Area 1 is advertised by its ABR as a Type 3 LSA into backbone Area 0, and the Area 2 ABR then regenerates another Type 3 LSA into Area 2, maintaining loop-free routing through OSPF’s rule prohibiting inter-area routes from traversing multiple non-backbone areas.
Understanding LSA types helps troubleshooting routing issues where missing Type 3 LSAs might indicate ABR configuration problems or area connectivity issues, and verifying Type 3 contents helps confirm proper route advertisement between areas.
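The “area X range” summarization described above can be sketched as follows; the process ID and prefixes are illustrative assumptions. On an ABR connecting Area 1 to Area 0, contiguous Area 1 prefixes are collapsed into a single Type 3 advertisement:

```
! ABR: advertise one Type 3 summary (10.1.0.0/16) into other areas
! instead of individual Type 3 LSAs for each 10.1.x.0/24 subnet
router ospf 1
 area 1 range 10.1.0.0 255.255.0.0
```

"show ip ospf database summary" on routers in other areas would then show the single summarized Type 3 LSA rather than the component prefixes.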
Question 198
An administrator is configuring wireless guest access with web authentication. Which component presents the authentication portal page to guest users?
A) RADIUS server exclusively
B) Wireless LAN Controller with portal
C) Access point independently
D) External DHCP server
Answer: B)
Explanation:
Wireless LAN Controller with portal functionality presents the authentication portal page to guest users, intercepting HTTP/HTTPS traffic and redirecting browsers to the captive portal for credential entry or acceptable use policy acknowledgment before granting network access. Guest wireless access requires balancing security with user convenience, providing internet access while protecting internal resources and maintaining accountability.
Web authentication is ideal for guest scenarios because it works with any device having a web browser without requiring software installation or complex configuration, making it accessible for visitors. The architecture involves multiple components working together. The WLC is configured with guest WLANs using web authentication, defining captive portal settings including portal page customization with branding, logos, and terms of service, authentication method selection choosing between local user database, RADIUS server, or external portal, and post-authentication actions like redirection to specific URLs. Guest users connect to the guest SSID and receive IP addressing typically on a guest VLAN isolated from corporate resources. When users attempt browsing, the WLC intercepts the initial HTTP or HTTPS request and redirects the browser to the captive portal page.
The portal presents login interface where users enter credentials or accept terms, or displays sponsor approval requirements for sponsor-based guest access. Upon successful authentication or acceptance, the WLC updates its client database marking the user as authenticated and allowing internet access while maintaining corporate network isolation. Session timers automatically terminate guest access after configured duration. Portal customization options include uploading custom logos and backgrounds matching corporate branding, editing HTML/CSS for portal appearance, configuring multiple language support for international guests, defining acceptable use policies requiring acknowledgement, and implementing custom success pages with helpful information like Wi-Fi coverage areas or business information.
Authentication backend options provide flexibility where local user database stores guest credentials directly on WLC suitable for small deployments or emergency access, RADIUS integration with ISE or ACS provides scalable credential management and policy enforcement, external web authentication redirects to external portal servers for advanced customization or integration with existing guest management systems, and social login allows authentication using Facebook, Google, or LinkedIn credentials. Sponsor-based workflows require employees to sponsor guests by creating temporary credentials, providing accountability for guest actions. The WLC enforces policy after authentication through ACLs limiting guest access to internet only while blocking internal resources, QoS profiles throttling bandwidth per guest preventing abuse, and session limits controlling maximum concurrent guests per access point or WLC.
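As a rough sketch of the guest WLAN setup described above on an AireOS-based WLC CLI (the WLAN ID, profile name, and SSID are illustrative; a Catalyst 9800 running IOS-XE uses different syntax):

```
> config wlan create 5 Guest-Profile GuestSSID
> config wlan security wpa disable 5
> config wlan security web-auth enable 5
> config wlan session-timeout 5 3600
> config wlan enable 5
```

This creates an open WLAN with Layer 3 web authentication and a one-hour session timeout, matching the session-timer behavior noted above; portal customization and the authentication backend would be configured separately.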
Question 199
A network engineer is implementing Cisco SD-WAN. Which component serves as the centralized control plane orchestrator in SD-WAN architecture?
A) vEdge router exclusively
B) vSmart controller
C) vBond orchestrator only
D) vManage NMS alone
Answer: B)
Explanation:
vSmart controller serves as the centralized control plane orchestrator in SD-WAN architecture, maintaining the overlay network topology, distributing routing and policy information to edge routers, and implementing centralized control while allowing distributed data plane forwarding. Cisco SD-WAN architecture consists of multiple components with distinct roles. vEdge routers (or cEdge routers running IOS-XE) are deployed at branch offices and data centers, forming the SD-WAN overlay by establishing secure tunnels, forwarding traffic based on policies received from vSmart, and reporting telemetry to vManage.
These routers are the data plane performing packet forwarding. vSmart controllers are the control plane implementing OMP (Overlay Management Protocol) which is SD-WAN’s routing protocol, aggregating routes from all vEdge routers and distributing reachability information, enforcing centralized policies including application-aware routing and service chaining, and maintaining OMP sessions with all vEdge routers. Multiple vSmart controllers provide redundancy and scalability in production deployments.
vBond orchestrator performs initial authentication and orchestration during vEdge onboarding, providing zero-touch provisioning where new vEdge routers discover vBond, authenticate using certificates, and learn the IP addresses of vSmart and vManage controllers. vManage is the management and monitoring platform, providing a unified GUI for configuration, real-time and historical analytics for network performance and application visibility, centralized policy configuration with templates and device groups, and alarm and troubleshooting tools. The architecture implements software-defined principles where the control plane (vSmart) is separated from the data plane (vEdge), centralized policy management provides consistent enforcement across distributed locations, and automation reduces manual configuration and human errors.
OMP operates between vEdge and vSmart similar to BGP for traditional networks, advertising routes including service-side routes from local networks, TLOC routes describing overlay tunnel endpoints and their characteristics, and service routes for service chaining. vSmart applies policies to OMP advertisements implementing topology-based policies controlling which routes are advertised to which sites, control policies manipulating routing attributes like preference, and data policies controlling traffic forwarding behavior including application-aware routing. vEdge routers receive routing and policy information from vSmart, install best routes in their routing table, and make forwarding decisions locally without per-packet control plane involvement, enabling distributed forwarding with centralized control. This architecture scales to thousands of branch locations while maintaining centralized visibility and policy control.
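The OMP relationships described above can be verified from a vEdge router's CLI; these are standard Viptela show commands, shown here as a verification sketch rather than a complete procedure:

```
show control connections    ! DTLS/TLS sessions to vBond, vSmart, vManage
show omp peers              ! OMP sessions with vSmart controllers
show omp routes             ! service-side routes learned via OMP
show omp tlocs              ! overlay tunnel endpoints (TLOC routes)
```

Healthy output would show control connections in the "up" state to each vSmart, confirming the router is receiving routes and policies from the centralized control plane.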
Question 200
An administrator needs to configure switch port security to generate SNMP traps when violations occur without shutting down the port. Which violation mode should be configured?
A) Shutdown mode
B) Restrict mode
C) Protect mode
D) Monitor mode
Answer: B)
Explanation:
Restrict mode generates SNMP traps and syslog messages when security violations occur, increments violation counters, and drops traffic from unauthorized MAC addresses while keeping the port operational and allowing authorized traffic to continue flowing. Port security violation modes determine switch behavior when frames arrive from MAC addresses exceeding the maximum configured addresses or not in the allowed address list. Understanding each mode’s characteristics enables appropriate security policy implementation. Shutdown mode provides the strictest security by immediately placing the port into error-disabled state when violations occur, stopping all traffic and requiring administrator intervention through “shutdown” then “no shutdown” commands or automatic recovery via error-disable recovery with timers.
This mode ensures the strongest security response but causes operational disruption affecting all devices on the port, including authorized ones. Shutdown mode is appropriate for high-security environments where policy violations warrant immediate port isolation and investigation before restoration. Restrict mode offers a balanced approach by dropping frames from violating MAC addresses while allowing frames from authorized addresses, generating SNMP trap and syslog notifications alerting administrators of the violation, incrementing the security violation counter visible in “show port-security interface” output, and keeping the port in an operational state, avoiding service disruption to legitimate traffic.
This mode enables security monitoring and enforcement without the downtime associated with shutdown mode, suitable for environments where violations should be prevented and logged but port availability must be maintained. Protect mode silently drops violating frames and allows authorized frames without generating any notifications or incrementing counters, providing security enforcement with minimal operational overhead but no visibility into violation attempts. This mode is appropriate when security enforcement is needed but logging overhead or alert flooding is a concern, though the lack of visibility makes it difficult to detect security issues or misconfigurations. The restrict mode specifically fulfills requirements for SNMP trap generation while maintaining port operation, making it the correct answer.
Configuration combines restrict mode with other port security settings including maximum MAC address count limiting how many addresses are permitted, MAC address learning method choosing dynamic, sticky, or static, and aging configuration removing learned addresses after inactivity. SNMP trap generation requires enabling SNMP on the switch with “snmp-server enable traps port-security” command and configuring SNMP server destinations to receive traps. Verification monitors violation counters and trap transmission confirming proper operation.
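A sketch of the configuration described above; the interface, SNMP host address, and community string are illustrative assumptions:

```
interface GigabitEthernet0/10
 switchport mode access
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation restrict
 switchport port-security mac-address sticky
!
! Enable port-security traps and define a trap receiver
snmp-server enable traps port-security
snmp-server host 192.0.2.50 version 2c public
```

With restrict mode, a third MAC address on the port is dropped, the violation counter in "show port-security interface GigabitEthernet0/10" increments, and a trap is sent, while the two sticky-learned hosts keep forwarding.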