Question 41
A network engineer needs to configure OSPF to prevent routing updates from being sent on an interface while still allowing OSPF to advertise the connected network. Which command should be used?
A) ip ospf network point-to-point
B) passive-interface
C) ip ospf priority 0
D) area stub
Answer: B
Explanation:
OSPF interface behavior control is essential for secure and efficient routing. Passive-interface prevents OSPF from sending Hello packets and forming adjacencies on specified interfaces, while still advertising the connected network into OSPF, reducing unnecessary protocol traffic, preventing unauthorized neighbor relationships, conserving bandwidth and CPU resources, and representing security best practice for edge interfaces.
Passive interfaces maintain the network in the OSPF database and advertise it to other routers, but suppress Hello packets, prevent neighbor discovery, block routing updates transmission, and protect networks from accidental or malicious OSPF adjacencies.
Common use cases include interfaces connected to end-user networks where no OSPF neighbors exist, DMZ segments requiring route advertisement without protocol exposure, connections to non-OSPF devices, security boundaries, and anywhere OSPF advertisement needed without neighbor formation.
Configuration approaches show per-interface configuration using passive-interface command under router process, default passive with selective activation using passive-interface default plus no passive-interface for specific interfaces, and verification through show ip ospf interface.
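The following minimal sketch illustrates both approaches; the OSPF process ID, network statement, and interface names are placeholders:

router ospf 1
 network 10.1.1.0 0.0.0.255 area 0
 ! Per-interface approach: suppress Hellos on a single edge interface
 passive-interface GigabitEthernet0/1
 ! Default-passive approach: make everything passive, then re-enable only real neighbor links
 passive-interface default
 no passive-interface GigabitEthernet0/0
!
! Verify which interfaces are passive and which still form adjacencies
show ip ospf interface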
Security benefits include preventing unauthorized routers from forming adjacencies, reducing attack surface, protecting against routing protocol attacks, eliminating unnecessary protocol traffic on edge segments, and maintaining network advertisement control.
Resource optimization reduces CPU utilization from unnecessary Hello processing, conserves bandwidth on low-speed links, eliminates pointless adjacency attempts, focuses OSPF resources on legitimate neighbors, and improves overall protocol efficiency.
Alternative considerations show not using passive-interface when adjacencies needed, combining with authentication for active interfaces, implementing route filtering as additional control, and considering area design for isolation.
Best practices recommend making edge interfaces passive by default, explicitly enabling OSPF only where needed, documenting passive interface decisions, auditing configurations regularly, combining with authentication on active interfaces, and testing neighbor formations.
Why other options are incorrect:
A) Point-to-point network type changes OSPF behavior on interface, still sends Hellos and forms adjacencies, doesn’t prevent routing updates, and doesn’t achieve security objective.
C) OSPF priority 0 prevents router from becoming DR/BDR, doesn’t stop Hello packets, still forms adjacencies normally, and doesn’t address requirement of preventing routing updates.
D) Area stub configuration affects LSA types in entire area, doesn’t control per-interface behavior, still allows adjacencies within area, and doesn’t prevent routing updates on specific interface.
Question 42
A company needs to implement redundant Internet connections with automatic failover. Which routing protocol is most appropriate for multi-homed Internet connectivity?
A) OSPF
B) EIGRP
C) BGP
D) RIP
Answer: C
Explanation:
Multi-homed Internet connectivity requires appropriate routing protocol selection. BGP (Border Gateway Protocol) is designed specifically for Internet routing and multi-homing scenarios, provides path control between autonomous systems, supports policy-based routing decisions, enables redundancy with multiple ISPs, offers granular traffic engineering capabilities, and represents standard protocol for enterprise Internet connectivity.
BGP exchanges routing information between autonomous systems, maintains path attributes for intelligent routing decisions, supports provider-independent addressing, enables inbound and outbound traffic control, and provides stability through path selection algorithms.
Multi-homing benefits include redundancy through multiple Internet connections, automatic failover to backup links, load distribution across connections, provider diversity eliminating single point of failure, and improved reliability for critical applications.
BGP advantages show path manipulation through AS-PATH, local preference, and MED attributes, policy-based routing for traffic engineering, scalability for Internet-scale routing, support for IPv4 and IPv6, and industry-standard inter-AS protocol.
Redundancy mechanisms enable configuring multiple BGP peering sessions, implementing automatic failover on link failure, preferring primary paths through attribute manipulation, detecting failures through keepalive mechanisms, and maintaining routing table consistency.
Implementation scenarios include dual-homed to single ISP, multi-homed to different ISPs, using provider-independent address space, implementing complex traffic policies, and requiring granular routing control.
Configuration considerations require obtaining AS number (private or public), establishing eBGP sessions with ISPs, implementing appropriate filtering, configuring path preference attributes, planning IP addressing strategy, and coordinating with ISP technical teams.
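A simplified dual-ISP sketch is shown below; the AS numbers, neighbor addresses, and advertised prefix are examples only, and production deployments also need prefix filters agreed with each provider:

router bgp 64512
 ! eBGP sessions to two different ISPs
 neighbor 203.0.113.1 remote-as 65001
 neighbor 198.51.100.1 remote-as 65002
 ! Advertise the company prefix to both providers
 network 192.0.2.0 mask 255.255.255.0
 ! Prefer ISP 1 for outbound traffic by raising local preference on its routes
 neighbor 203.0.113.1 route-map PREFER-ISP1 in
!
route-map PREFER-ISP1 permit 10
 set local-preference 200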
Best practices recommend using BGP for multi-homing scenarios, implementing redundant BGP speakers, filtering routes appropriately, documenting routing policies, monitoring BGP sessions, maintaining ISP relationships, and planning for growth.
Why other options are incorrect:
A) OSPF is interior gateway protocol, designed for internal networks, not between autonomous systems, doesn’t interact with ISP routing, and inappropriate for Internet connectivity.
B) EIGRP is Cisco proprietary IGP, not supported by ISPs, designed for enterprise internal routing, doesn’t provide Internet routing capabilities, and can’t establish sessions with ISP routers.
D) RIP is legacy distance vector protocol, extremely limited scalability, maximum 15-hop limit, slow convergence, not used for Internet routing, and completely inadequate for modern requirements.
Question 43
An engineer must configure a switch to allow only one MAC address per port and shut down the port if a violation occurs. Which port security configuration achieves this?
A) Maximum 1, violation mode protect
B) Maximum 1, violation mode restrict
C) Maximum 1, violation mode shutdown
D) Maximum 2, violation mode shutdown
Answer: C
Explanation:
Port security enforcement requires proper violation mode selection. Maximum 1 with violation mode shutdown provides strongest security posture by allowing exactly one MAC address per port, immediately disabling port when unauthorized MAC address detected, requiring manual intervention for recovery, preventing security breaches effectively, generating security alerts, and representing most secure configuration.
Shutdown violation mode places port in err-disabled state on violation, blocks all traffic completely, requires administrator action for recovery, provides clear security event indication, and prevents continued attack attempts.
Security enforcement shows maximum 1 limiting port to single device, shutdown mode disabling port immediately on violation, err-disabled state requiring manual recovery through shutdown/no shutdown, preventing MAC address spoofing, and eliminating unauthorized device access.
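A minimal interface sketch follows; the interface name is an example:

interface GigabitEthernet0/5
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security violation shutdown
!
! Confirm the violation mode and check for err-disabled ports
show port-security interface GigabitEthernet0/5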
Violation detection triggers on second MAC address appearing, attempts to change MAC address, unauthorized device connection, MAC address spoofing attempts, and any deviation from learned MAC.
Recovery process requires identifying violation cause, removing unauthorized device, investigating security incident, administratively recovering port with shutdown/no shutdown or errdisable recovery, and implementing preventive measures.
Monitoring requirements include configuring SNMP traps for violations, reviewing syslog messages, checking port security counters, maintaining violation logs, and analyzing security patterns.
Use cases include securing conference room jacks, protecting specific high-security areas, preventing hub connections, enforcing one-device-per-port policies, and meeting compliance requirements.
Alternative approaches show sticky MAC addresses for dynamic learning, aging for temporary violations, combining with 802.1X for stronger authentication, and using restrict mode for less critical ports.
Best practices recommend using shutdown mode for security-sensitive environments, documenting recovery procedures, implementing monitoring, training staff on port security, testing violation scenarios, and maintaining security documentation.
Why other options are incorrect:
A) Protect mode silently drops violations, generates no alerts, provides no administrative visibility, defeats security monitoring purpose, and inadequate for security-sensitive requirements.
B) Restrict mode drops violations but doesn’t shut down the port, allows authorized traffic to continue, generates alerts, less severe than required, and doesn’t meet the requirement to shut the port down.
D) Maximum 2 allows two MAC addresses, doesn’t achieve “only one MAC address” requirement, permits unauthorized second device, and weakens security posture.
Question 44
A network administrator needs to configure DHCPv6 to provide both addresses and additional options like DNS servers. Which DHCPv6 mode should be used?
A) Stateless DHCPv6
B) Stateful DHCPv6
C) SLAAC only
D) DHCPv6-PD
Answer: B
Explanation:
IPv6 address assignment methods vary in control and capabilities. Stateful DHCPv6 provides complete address management including assigning IPv6 addresses from defined pools, tracking address assignments with lease management, providing DNS servers and other options, maintaining client state information, enabling centralized address control, and representing most comprehensive DHCPv6 approach.
Stateful DHCPv6 operates similarly to DHCPv4, allocating addresses from configured pools, managing lease durations, tracking assignments in database, providing complete configuration parameters, and enabling administrative control.
Address assignment process shows client sending SOLICIT message, server responding with ADVERTISE, client requesting with REQUEST, server confirming with REPLY including address and options, and lease renewal through RENEW messages.
Configuration options include IPv6 address allocation, DNS server addresses, domain search lists, NTP servers, SIP servers, and other DHCP options supporting comprehensive client configuration.
Stateful advantages provide centralized address management, address assignment tracking, conflict prevention through managed allocation, integration with existing DHCP infrastructure, and compliance with policies requiring address control.
Router Advertisement flags require M-flag (Managed) set to 1 instructing clients to use DHCPv6 for addresses, O-flag configuration for other options, and proper flag combination directing client behavior.
Use cases include environments requiring address tracking, compliance-driven deployments, networks needing strict address control, integration with existing DHCP management tools, and scenarios requiring comprehensive client configuration.
Implementation considerations involve configuring DHCPv6 server with pools, setting appropriate lease times, defining option parameters, configuring router advertisements with proper flags, and testing client assignment.
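A minimal stateful DHCPv6 sketch on the router follows; the pool name, prefix, and DNS server address are placeholders:

ipv6 unicast-routing
!
ipv6 dhcp pool LAN-POOL
 address prefix 2001:db8:1::/64
 dns-server 2001:db8::53
 domain-name example.com
!
interface GigabitEthernet0/0
 ipv6 address 2001:db8:1::1/64
 ipv6 dhcp server LAN-POOL
 ! M-flag = 1 tells clients to obtain addresses (and options) via DHCPv6
 ipv6 nd managed-config-flag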
Best practices recommend documenting addressing scheme, monitoring pool utilization, implementing redundant servers, configuring appropriate lease durations, maintaining consistent configuration, and planning for growth.
Why other options are incorrect:
A) Stateless DHCPv6 provides options like DNS but not addresses, clients use SLAAC for addresses, partial solution, and doesn’t meet requirement for “providing both addresses and options.”
C) SLAAC only generates addresses from router advertisements, provides no DHCPv6 options, lacks DNS server configuration, and insufficient for complete configuration requirement.
D) DHCPv6-PD (Prefix Delegation) delegates IPv6 prefixes to routers, used for WAN connections, doesn’t provide host addresses, and serves completely different purpose.
Question 45
An engineer must configure HSRP to ensure a specific router becomes active. Which configuration parameter should be modified?
A) Preempt
B) Priority
C) Timers
D) Track
Answer: B
Explanation:
HSRP active router selection requires understanding election process. Priority determines which router becomes active in HSRP group, with highest priority winning election, defaulting to 100, ranging from 0 to 255, providing administrative control over active router selection, and representing primary mechanism for influencing HSRP election.
Priority configuration directly controls active router selection, with highest priority router becoming active, tie-breaking using highest IP address, dynamic adjustment through tracking, and immediate impact on election results.
Priority configuration involves setting priority value on desired active router higher than others, typically using 110 for primary and default 100 for backup, configuring under HSRP group, and combining with preempt for automatic takeover.
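A typical sketch for the router that should become active follows; the group number, addresses, and priority value are examples:

interface GigabitEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 standby 1 ip 10.1.1.1
 ! Higher than the default of 100, so this router wins the election
 standby 1 priority 110
 ! Preempt lets this router reclaim the active role after recovering from a failure
 standby 1 preempt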
Election process shows routers comparing priorities during initialization, highest priority becoming active, others entering standby or speak state, priorities communicated in Hello packets, and elections occurring on priority changes.
Preempt relationship shows priority determining which router should be active, preempt enabling higher-priority router to take over from current active, combining both for complete control, and understanding that priority alone doesn’t force takeover without preempt.
Dynamic priority adjustment uses track feature reducing priority on specific conditions, monitoring interface or object state, automatically failing over when issues detected, and implementing intelligent redundancy.
Common priority values include 110 for preferred primary, 100 as default, 90 for secondary backup, and appropriate spacing allowing intermediate values for three or more routers.
Verification commands show priority with show standby, display active router selection, indicate preempt status, and reveal tracking configuration.
Best practices recommend setting priority explicitly on all routers, using consistent priority scheme, documenting priority decisions, combining with preempt, implementing tracking for intelligence, and testing failover scenarios.
Why other options are incorrect:
A) Preempt allows higher-priority router to take over, doesn’t determine which router should be active, enables priority to take effect, necessary but not sufficient alone, and priority must be set first.
C) Timers control Hello and hold time intervals, affect failure detection speed, don’t influence which router becomes active, and serve different purpose.
D) Track adjusts priority based on conditions, modifies priority dynamically, but priority setting itself determines active router, and tracking supplements rather than replaces priority configuration.
Question 46
A company needs to implement network segmentation using VLANs. Which switch port mode allows traffic from multiple VLANs?
A) Access mode
B) Trunk mode
C) Dynamic auto
D) Dynamic desirable
Answer: B
Explanation:
VLAN traffic segregation requires appropriate port configuration. Trunk mode carries traffic from multiple VLANs simultaneously, uses 802.1Q or ISL encapsulation, tags frames with VLAN IDs, connects switches together, enables inter-switch VLAN communication, and represents essential configuration for VLAN distribution.
Trunk ports tag Ethernet frames with VLAN information, allowing receiving device to determine VLAN membership, supporting up to 4094 VLANs, maintaining VLAN segregation across links, and enabling scalable VLAN designs.
Trunk operation shows trunk adding 4-byte 802.1Q tag to frames, including VLAN ID in tag, receiving switch reading VLAN ID, forwarding to appropriate VLAN ports, and maintaining proper VLAN separation.
Native VLAN handles untagged frames on trunk, defaults to VLAN 1, should be changed for security, must match on both trunk ends, and requires careful management.
Configuration requirements include setting switchport mode trunk, specifying allowed VLANs, configuring encapsulation if needed, setting native VLAN, and verifying trunk operation.
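A static trunk sketch follows; the interface, VLAN list, and native VLAN are examples, and the encapsulation command applies only to platforms that still offer a choice between 802.1Q and ISL:

interface GigabitEthernet0/24
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 switchport trunk native vlan 999
 ! Disable DTP on a statically configured trunk
 switchport nonegotiate
!
show interfaces trunk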
Use cases include connecting switches in distribution layer, linking access to distribution switches, connecting to routers for inter-VLAN routing, attaching wireless controllers supporting multiple WLANs, and any scenario requiring multiple VLAN transport.
Security considerations involve changing native VLAN from default, pruning unused VLANs, implementing VLAN access lists, preventing VLAN hopping attacks, and monitoring trunk ports.
Best practices recommend using 802.1Q encapsulation, configuring trunks statically, changing native VLAN, allowing only necessary VLANs, documenting trunk purpose, and disabling DTP.
Why other options are incorrect:
A) Access mode assigns port to single VLAN, carries traffic for one VLAN only, doesn’t tag frames, used for end devices, and doesn’t support multiple VLANs.
C) Dynamic auto waits for DTP negotiation, doesn’t inherently carry multiple VLANs, creates security risks, deprecated configuration method, and should be avoided.
D) Dynamic desirable actively negotiates trunking, security concern like dynamic auto, not explicit trunk mode, better to configure statically, and also deprecated practice.
Question 47
An engineer needs to configure a GRE tunnel between two routers. Which protocol provides the encapsulation for GRE tunnels?
A) IP protocol 47
B) IP protocol 50
C) TCP port 1723
D) UDP port 500
Answer: A
Explanation:
GRE (Generic Routing Encapsulation) tunnel implementation requires understanding protocol mechanics. IP protocol 47 provides GRE encapsulation, carrying various network layer protocols over IP networks, creating virtual point-to-point links, enabling routing protocol operation across non-contiguous networks, supporting multicast and broadcast traffic, and representing versatile tunneling solution.
GRE encapsulates original packets inside new IP packets, uses protocol number 47 in IP header, adds GRE header between outer and inner IP headers, maintains original packet integrity, and enables tunneling various protocols.
GRE characteristics include stateless protocol operation, no built-in encryption, support for multicast and routing protocols, simple configuration, low overhead, and compatibility with various payload protocols.
Tunnel configuration requires defining tunnel source and destination, specifying tunnel interface, configuring IP addressing on tunnel, enabling routing protocol if needed, and verifying connectivity.
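A basic GRE sketch for one tunnel endpoint follows; the tunnel addressing, source interface, and destination address are examples:

interface Tunnel0
 ip address 172.16.1.1 255.255.255.252
 ! GRE over IP (protocol 47) is the default tunnel mode, so no mode command is needed
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.2
 ! Optional keepalives detect a dead remote endpoint
 keepalive 10 3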
Use cases include connecting non-contiguous network segments, extending routing domains across Internet, enabling multicast over unicast infrastructure, implementing site-to-site connectivity, and supporting dynamic routing protocols.
Security considerations show GRE lacking native encryption, requiring IPsec for confidentiality, vulnerable to spoofing without protection, needing access control, and typically combined with IPsec in production.
GRE over IPsec protects GRE traffic with encryption, provides authentication and integrity, enables secure tunneling, combines GRE flexibility with IPsec security, and represents common deployment pattern.
Troubleshooting involves verifying IP protocol 47 permitted through firewalls, checking tunnel endpoint reachability, validating tunnel interface status, confirming routing over tunnel, and monitoring tunnel statistics.
Best practices recommend combining GRE with IPsec for security, using tunnel protection feature, implementing keepalives, monitoring tunnel health, documenting tunnel purpose, and considering alternatives like IPsec VTI.
Why other options are incorrect:
B) IP protocol 50 is ESP (Encapsulating Security Payload) for IPsec, provides encryption, different protocol than GRE, used in IPsec VPNs, and not GRE encapsulation.
C) TCP port 1723 is used by PPTP (Point-to-Point Tunneling Protocol), a different VPN technology; although PPTP carries its data channel in a modified GRE encapsulation, port 1723 is only the PPTP control channel and does not provide GRE encapsulation.
D) UDP port 500 is IKE (Internet Key Exchange) for IPsec, negotiates security associations, part of IPsec not GRE, and serves key management function.
Question 48
A network administrator must configure EtherChannel to load balance based on both source and destination IP addresses. Which load balancing method should be used?
A) src-mac
B) dst-ip
C) src-dst-ip
D) src-port
Answer: C
Explanation:
EtherChannel load balancing optimization requires appropriate hashing algorithm. Src-dst-ip distributes traffic based on combination of source and destination IP addresses, provides better distribution than single parameter, considers both endpoints for hashing calculation, enables more granular load sharing, optimizes link utilization, and represents commonly recommended method for Layer 3 load balancing.
EtherChannel load balancing uses hashing algorithm on specified fields, calculating hash value from packet headers, mapping to specific physical link, maintaining flow consistency, preventing packet reordering, and distributing traffic intelligently.
Hashing considerations show src-dst-ip providing best distribution for diverse traffic patterns, considering both source and destination creating more hash values, improving load distribution across links, maintaining session persistence, and optimizing bandwidth utilization.
Load balancing options include source MAC for Layer 2, destination MAC for Layer 2, source IP for Layer 3, destination IP for Layer 3, src-dst-mac for Layer 2, src-dst-ip for Layer 3, and port-based for Layer 4.
Configuration command sets global load balance method using port-channel load-balance command, applies to all EtherChannels on switch, requires selection appropriate to traffic type, and verifies with show etherchannel load-balance.
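A brief sketch of the command and its verification follows:

! Global command, applies to every EtherChannel on this switch
port-channel load-balance src-dst-ip
!
! Confirm the active hashing method
show etherchannel load-balance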
Traffic pattern impact shows src-dst-ip performing well with multiple sources to multiple destinations, single source scenarios benefiting from dst-ip, multiple clients to single server using src-ip, and server farm to internet using src-dst-ip.
Flow consistency maintains packets from same session on single link, prevents reordering within flows, ensures application compatibility, provides stable path per conversation, and maintains TCP performance.
Best practices recommend analyzing traffic patterns before selection, using src-dst-ip for balanced environments, considering Layer 4 parameters when needed, monitoring link utilization, testing load distribution, and adjusting if imbalance observed.
Why other options are incorrect:
A) Src-mac uses only source MAC address, poor distribution in router-to-router scenarios where source MAC constant, better for Layer 2 access switches, and doesn’t meet “both source and destination IP” requirement.
B) Dst-ip uses only destination IP, doesn’t consider source, provides limited distribution when traffic destined to few destinations, and requirement specifies “both source and destination.”
D) Src-port uses only source port number, Layer 4 parameter, doesn’t include destination, and requirement specifically asks for IP address-based method.
Question 49
An engineer must implement QoS to guarantee bandwidth for critical applications. Which queuing mechanism provides strict bandwidth guarantee?
A) FIFO queuing
B) Priority queuing (LLQ)
C) Weighted Fair Queuing (WFQ)
D) Class-Based Weighted Fair Queuing (CBWFQ)
Answer: B
Explanation:
QoS bandwidth guarantees require appropriate queuing mechanisms. Priority queuing (Low Latency Queuing) provides strict priority with guaranteed bandwidth, services priority queue before other queues, ensures low latency and jitter for critical traffic, polices priority traffic to prevent starvation, enables voice and real-time application support, and represents strongest bandwidth guarantee mechanism.
LLQ implements strict priority scheduling for designated traffic class, guaranteeing immediate service up to configured bandwidth, policing excess priority traffic, protecting other queues from starvation, and combining with CBWFQ for remaining classes.
LLQ operation shows priority queue serviced first regardless of other traffic, configured bandwidth guaranteed for priority class, excess priority traffic policed and dropped, remaining bandwidth shared among non-priority classes, and protection against priority queue monopolizing bandwidth.
Configuration elements include defining traffic classes with class-maps, marking traffic appropriately, applying policy with priority command, specifying guaranteed bandwidth, and configuring remaining classes with bandwidth or fair-queue.
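A condensed LLQ sketch follows; the class names, DSCP value, bandwidth figure, and interface are examples:

class-map match-all VOICE
 match dscp ef
!
policy-map WAN-EDGE
 class VOICE
  ! Strict-priority queue, policed to 1000 kbps
  priority 1000
 class class-default
  fair-queue
!
interface GigabitEthernet0/1
 service-policy output WAN-EDGE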
Bandwidth guarantee provides specified amount always available to priority traffic, prevents starvation through policing, ensures consistent performance, supports SLA requirements, and enables predictable application behavior.
Use cases include VoIP traffic requiring consistent low latency, video conferencing needing jitter control, critical business applications demanding guaranteed performance, real-time industrial control, and any application intolerant of delay.
Policing mechanism prevents priority traffic from exceeding configured bandwidth, drops excess packets, protects other queues, maintains QoS policy integrity, and prevents single class monopolization.
Best practices recommend limiting priority class to 33% of bandwidth, marking traffic at network edge, implementing CAC (Call Admission Control), monitoring queue depths, validating QoS effectiveness, and documenting QoS policies.
Why other options are incorrect:
A) FIFO queuing provides no prioritization, first packet in first out, no bandwidth guarantee, no differentiation between traffic types, and completely inadequate for QoS requirements.
C) WFQ provides fair allocation, no strict guarantee, flow-based fairness, good for default traffic, but doesn’t provide explicit bandwidth guarantee for specific class.
D) CBWFQ guarantees minimum bandwidth, not strict priority, shares bandwidth fairly when congested, better than WFQ but doesn’t provide strict priority service required for “guarantee bandwidth.”
Question 50
A company needs to implement wireless authentication using digital certificates. Which EAP method provides certificate-based authentication?
A) LEAP
B) PEAP-MSCHAPv2
C) EAP-TLS
D) EAP-FAST
Answer: C
Explanation:
Wireless security authentication methods vary in strength and implementation. EAP-TLS (Extensible Authentication Protocol – Transport Layer Security) provides strongest wireless authentication using digital certificates, requires certificates on both client and server, provides mutual authentication, eliminates password vulnerabilities, enables PKI integration, and represents most secure EAP method.
EAP-TLS authenticates both wireless client and RADIUS server using certificates, establishes encrypted tunnel using TLS, provides strong mutual authentication, eliminates password transmission, and leverages PKI infrastructure.
Certificate requirements include CA (Certificate Authority) infrastructure, server certificates on RADIUS servers, client certificates distributed to users/devices, certificate validation during authentication, and proper certificate lifecycle management.
Authentication process shows client presenting certificate to server, server validating client certificate, client validating server certificate, mutual authentication completing, and secure connection establishing.
Security advantages include strongest authentication method available, eliminates password attacks, provides mutual authentication, integrates with existing PKI, supports non-repudiation, and enables centralized certificate management.
Implementation considerations require deploying certificate infrastructure, distributing client certificates, managing certificate lifecycle, configuring RADIUS servers, and training users on certificate handling.
Use cases include high-security environments, compliance requirements demanding strong authentication, organizations with existing PKI, preventing credential theft, and deployments prioritizing security over convenience.
Certificate management involves issuing certificates to users, revoking compromised certificates, renewing expiring certificates, maintaining CRL or OCSP, and monitoring certificate status.
Best practices recommend using EAP-TLS for highest security, implementing proper certificate management, automating distribution where possible, monitoring certificate expiration, maintaining CA security, and documenting authentication architecture.
Why other options are incorrect:
A) LEAP is Cisco proprietary, deprecated due to vulnerabilities, uses username/password not certificates, weak authentication, and should not be deployed.
B) PEAP-MSCHAPv2 uses server certificate only, client authentication with username/password, doesn’t provide certificate-based client authentication, and less secure than EAP-TLS.
D) EAP-FAST uses PACs (Protected Access Credentials), not certificates, designed for performance, provides good security, but doesn’t meet certificate-based requirement.
Question 51
An administrator needs to configure a switch to forward multicast traffic only to ports with interested receivers. Which protocol should be enabled?
A) IGMP
B) IGMP Snooping
C) PIM
D) CGMP
Answer: B
Explanation:
Layer 2 multicast optimization requires protocol understanding. IGMP Snooping enables switches to intelligently forward multicast traffic only to ports with interested receivers, listens to IGMP messages between hosts and routers, builds multicast forwarding table, prevents multicast flooding, optimizes bandwidth utilization, and represents essential Layer 2 multicast feature.
IGMP Snooping examines IGMP membership reports and queries, identifies which ports have group members, maintains multicast MAC address table, forwards traffic only to interested ports, and prevents unnecessary flooding.
Operation mechanism shows switch monitoring IGMP messages, learning group membership per port, building multicast forwarding database, forwarding to specific ports only, and pruning unnecessary traffic.
Performance benefits include eliminating multicast flooding, reducing bandwidth consumption, preventing delivery to uninterested hosts, optimizing switch resources, and improving network efficiency.
IGMP message types monitored include membership queries from routers, membership reports from hosts, leave messages for departures, and version-specific messages for compatibility.
Querier functionality requires IGMP Snooping querier on VLAN without multicast router, maintaining group state, sending queries, and ensuring IGMP Snooping works without router.
Configuration considerations involve enabling globally and per-VLAN, configuring querier if needed, setting fast-leave for immediate pruning, defining static groups if required, and verifying operation.
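A brief sketch follows; IGMP Snooping is enabled by default on most Catalyst platforms, so these commands mainly matter when it has been disabled or when a querier is needed, and the VLAN number is an example:

ip igmp snooping
ip igmp snooping vlan 10
! Provide a querier on VLANs that have no multicast router
ip igmp snooping querier
!
show ip igmp snooping groups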
Troubleshooting includes checking IGMP Snooping status, verifying multicast groups, examining port membership, reviewing VLAN configuration, and validating router port identification.
Best practices recommend enabling IGMP Snooping on all VLANs, configuring querier when no router present, using fast-leave carefully, monitoring group membership, documenting multicast design, and testing multicast flows.
Why other options are incorrect:
A) IGMP operates between hosts and routers, not within switches, Layer 3 protocol, doesn’t control switch port forwarding, and IGMP Snooping monitors IGMP not replaces it.
C) PIM (Protocol Independent Multicast) is router-to-router protocol, builds distribution trees, operates at Layer 3, doesn’t control switch ports, and serves different purpose.
D) CGMP (Cisco Group Management Protocol) is legacy Cisco proprietary, replaced by IGMP Snooping, deprecated technology, and no longer recommended or commonly used.
Question 52
A network engineer must configure a switch stack to prevent split-brain scenario. Which feature ensures stack integrity?
A) StackWise Virtual
B) VSS (Virtual Switching System)
C) Stack-Power
D) Switch stack priority
Answer: A
Explanation:
Switch stacking requires split-brain prevention mechanisms. StackWise Virtual creates virtual switch from two physical switches, uses dual-active detection to prevent split-brain, maintains single control plane, provides high availability, enables seamless failover, and represents modern Cisco stacking technology.
Split-brain scenarios occur when stack partitions into multiple independent entities, causing duplicate active devices, conflicting configurations, network disruption, and requiring prevention through dual-active detection.
StackWise Virtual components include StackWise Virtual Link for control communication, Dual-Active Detection for split detection, single control plane for consistency, hitless failover capability, and multi-chassis EtherChannel support.
Dual-Active Detection methods show Fast-Hello using dedicated link between switches, Enhanced PAgP monitoring EtherChannel bundles, and both methods detecting split-brain conditions immediately.
Split prevention uses detection mechanisms identifying partition, bringing one switch down automatically, preventing dual-active operation, maintaining network integrity, and enabling faster recovery.
Configuration requirements include enabling StackWise Virtual, configuring virtual links, implementing dual-active detection, setting priorities, and validating redundancy.
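A rough outline on a Catalyst 9000-series pair follows; exact syntax varies by platform and release, the interfaces are examples, and a reload is required after enabling StackWise Virtual:

stackwise-virtual
 domain 1
!
interface TenGigabitEthernet1/0/1
 ! StackWise Virtual link carrying control and data traffic between chassis
 stackwise-virtual link 1
!
interface TenGigabitEthernet1/0/2
 ! Dedicated dual-active detection (fast hello) link
 stackwise-virtual dual-active-detection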
Advantages provide active-active forwarding, simplified management, consistent configuration, high availability, and improved resource utilization compared to traditional stacking.
Comparison with alternatives shows StackWise Virtual supporting two switches, traditional stacking supporting up to nine, VSS being predecessor technology, and StackWise Virtual representing current best practice.
Best practices recommend implementing dual-active detection, using redundant virtual links, configuring appropriate priorities, testing failover scenarios, monitoring stack health, and maintaining software consistency.
Why other options are incorrect:
B) VSS is predecessor technology for specific platforms, similar concept, but StackWise Virtual is modern implementation, and VSS being phased out in newer platforms.
C) Stack-Power manages power distribution in stack, doesn’t prevent split-brain, different function, and unrelated to control plane integrity.
D) Stack priority determines master selection, doesn’t prevent split-brain, helps with election, but doesn’t provide dual-active detection needed.
Question 53
An engineer needs to configure a router to translate IPv6 addresses to IPv4 for communication with legacy systems. Which NAT type is required?
A) Static NAT
B) NAT64
C) PAT
D) Twice NAT
Answer: B
Explanation:
IPv6 to IPv4 translation requires specialized NAT mechanisms. NAT64 translates between IPv6 and IPv4 addresses, enables IPv6-only hosts to access IPv4 resources, provides protocol translation, maintains stateful translation mapping, eliminates need for dual-stack on all devices, and represents essential transition technology.
NAT64 works with DNS64, translating IPv4 addresses into IPv6 format, maintaining translation state, converting protocols bidirectionally, enabling seamless communication, and supporting IPv6 adoption.
Translation process shows IPv6 host querying DNS64, receiving synthesized IPv6 address, sending packets to NAT64 gateway, NAT64 translating to IPv4, communicating with IPv4 server, translating responses back, and returning to IPv6 host.
Address format uses well-known prefix 64:ff9b::/96 by default, embeds IPv4 address in last 32 bits, enables algorithmic translation, supports custom prefixes, and maintains address relationship.
DNS64 integration translates DNS queries for IPv4 addresses, synthesizes AAAA records, provides IPv6-formatted addresses, works with NAT64 for complete solution, and enables discovery.
Use cases include IPv6-only networks accessing IPv4 Internet, transitioning to IPv6 while maintaining IPv4 connectivity, service provider IPv6 deployment, reducing IPv4 address requirements, and supporting IPv6 adoption.
Implementation considerations require deploying NAT64 gateway, configuring DNS64 server, defining address pools, setting translation policies, and testing connectivity.
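A condensed stateful NAT64 sketch on IOS XE follows; the access list, IPv4 pool, and interface names are examples, and DNS64 is configured separately on the DNS infrastructure:

ipv6 access-list V6-CLIENTS
 permit ipv6 2001:db8:1::/64 any
!
nat64 prefix stateful 64:ff9b::/96
nat64 v4 pool V4-POOL 203.0.113.10 203.0.113.20
nat64 v6v4 list V6-CLIENTS pool V4-POOL overload
!
interface GigabitEthernet0/0/0
 ! IPv6-facing interface
 nat64 enable
!
interface GigabitEthernet0/0/1
 ! IPv4-facing interface
 nat64 enable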
Limitations include applications using embedded IP addresses may fail, some protocols not translating well, security protocols requiring special handling, and FTP requiring ALG support.
Best practices recommend using standard prefix when possible, implementing logging for troubleshooting, monitoring translation table, planning address pools, testing applications, and documenting translation policy.
Why other options are incorrect:
A) Static NAT provides one-to-one address mapping, doesn’t translate between protocol versions, works within the same IP version, and is insufficient for IPv6-to-IPv4 translation.
C) PAT (Port Address Translation) overloads addresses with ports, doesn’t translate between IPv6 and IPv4, operates within a single IP version, and doesn’t provide protocol translation.
D) Twice NAT translates both source and destination addresses, operates within single IP version, used for overlapping addresses, and doesn’t translate between IPv6 and IPv4 protocols.
Question 54
A company needs to implement routing protocol authentication to prevent unauthorized routing updates. Which authentication method provides the strongest security for OSPF?
A) Plaintext authentication
B) MD5 authentication
C) SHA authentication
D) No authentication
Answer: C
Explanation:
OSPF routing security requires strong authentication mechanisms. SHA (Secure Hash Algorithm) authentication provides strongest cryptographic protection for OSPF updates, uses HMAC-SHA-256 or stronger algorithms, prevents routing table manipulation, protects against man-in-the-middle attacks, eliminates password vulnerabilities, and represents modern security best practice for routing protocols.
SHA authentication for OSPFv2 is provided through HMAC-SHA cryptographic authentication (defined in RFC 5709), offering significantly stronger cryptographic protection than MD5, producing longer hash values (up to 256 bits versus MD5's 128 bits), resisting collision attacks, aligning with modern security standards, and recommended for new deployments.
Security advantages include stronger cryptographic algorithms resistant to attacks, longer hash outputs providing better security, protection against rainbow table attacks, compliance with modern security standards, and future-proof authentication approach.
Authentication process shows OSPF computing HMAC-SHA hash over packet contents, including shared key in calculation, appending authentication data to packet, receiving router validating hash, and accepting only properly authenticated packets.
Key management requires configuring key chains with keys, setting key strings as secrets, configuring send and accept lifetimes for rotation, applying key chain to interfaces, and maintaining consistent keys across neighbors.
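A key-chain sketch follows for releases that support RFC 5709-style OSPF key chains; the chain name, key string, and interface are examples:

key chain OSPF-KEYS
 key 1
  key-string StrongSharedSecret
  cryptographic-algorithm hmac-sha-256
!
interface GigabitEthernet0/0
 ip ospf authentication key-chain OSPF-KEYS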
Implementation considerations involve configuring authentication on all OSPF routers in area, using consistent key chains, enabling authentication per interface or area-wide, testing neighbor adjacencies maintain, and verifying authentication through debug commands.
Migration from MD5 includes configuring both MD5 and SHA initially, phasing out MD5 over time, testing thoroughly during transition, maintaining neighbor relationships, and documenting upgrade process.
Compatibility shows SHA authentication requiring software support, checking platform capabilities, ensuring all routers support SHA, falling back to MD5 if needed, and planning upgrades appropriately.
Best practices recommend using SHA for all new deployments, implementing key rotation policies, maintaining secure key storage, monitoring authentication failures, documenting authentication design, and planning security maintenance windows.
Why other options are incorrect:
A) Plaintext authentication sends passwords unencrypted, easily captured, provides minimal security, deprecated for security reasons, and completely inadequate for modern networks.
B) MD5 authentication is legacy method, vulnerable to collision attacks, weaker than SHA, still widely used but deprecated, and SHA provides stronger security.
D) No authentication allows any router to inject updates, completely insecure, vulnerable to routing attacks, enables network compromise, and should never be used in production.
Question 55
An administrator must configure AAA to authenticate users against multiple sources with fallback. Which authentication method list configuration provides this capability?
A) aaa authentication login default group radius local
B) aaa authentication login default local
C) aaa authentication login default group radius
D) aaa authentication login default none
Answer: A
Explanation:
AAA authentication requires redundancy and fallback mechanisms. AAA authentication login default group radius local provides hierarchical authentication checking RADIUS servers first, falling back to local database if RADIUS unavailable, ensuring authentication services remain available, preventing lockout during server failures, and representing best practice for resilient authentication.
Method lists define ordered sequence of authentication methods, attempting each in order until success or all fail, providing redundancy and flexibility, enabling centralized and local authentication, and ensuring service availability.
Authentication sequence shows user attempting login, device querying RADIUS servers in group, falling back to local database if RADIUS timeout or unreachable, checking local credentials, and allowing or denying access based on results.
RADIUS advantages include centralized user management, comprehensive accounting, policy enforcement, integration with enterprise identity systems, and administrative efficiency.
Local fallback benefits provide authentication during RADIUS outage, prevent complete lockout, enable emergency access, maintain basic network access, and ensure operational continuity.
Configuration components include defining RADIUS server groups, configuring RADIUS servers with keys, creating method list with proper order, applying to lines or interfaces, and configuring local user accounts.
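A minimal sketch follows; the server name, address, shared secret, and local account are examples:

aaa new-model
!
radius server ISE-1
 address ipv4 10.10.10.5 auth-port 1812 acct-port 1813
 key RadiusSharedSecret
!
! Try RADIUS first, fall back to the local database if no server responds
aaa authentication login default group radius local
!
! Local account used only when the RADIUS servers are unreachable
username admin privilege 15 secret StrongLocalPassword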
Failover behavior shows RADIUS timeout triggering fallback, unreachable servers bypassed, local database checked next, immediate authentication from local if RADIUS unavailable, and transparent user experience.
Security considerations require maintaining strong local passwords, limiting local accounts, documenting emergency procedures, regularly testing failover, and monitoring authentication patterns.
Best practices recommend configuring RADIUS with local fallback, implementing redundant RADIUS servers, maintaining current local credentials, testing failover scenarios regularly, monitoring authentication logs, and documenting AAA design.
Why other options are incorrect:
B) Local only authentication lacks centralization, requires maintaining passwords on all devices, no accounting benefits, scalability issues, and misses enterprise authentication advantages.
C) RADIUS only without fallback creates single point of failure, complete lockout if RADIUS unavailable, no emergency access, and poor availability design.
D) None authentication allows access without credentials, completely insecure, eliminates authentication, creates massive security vulnerability, and never appropriate for production.
Question 56
A network team needs to implement First Hop Redundancy Protocol that supports load balancing across multiple gateways. Which protocol should be configured?
A) HSRP
B) VRRP
C) GLBP
D) STP
Answer: C
Explanation:
Gateway redundancy with load distribution requires specific protocol capabilities. GLBP (Gateway Load Balancing Protocol) uniquely provides active-active gateway redundancy where multiple routers simultaneously forward traffic, distributes client load across gateways, maintains redundancy while optimizing bandwidth, eliminates idle backup routers, and represents advanced Cisco solution combining redundancy with load balancing.
GLBP enables up to four routers forwarding traffic simultaneously, responding to ARP requests with different virtual MAC addresses, distributing forwarding responsibility, maintaining seamless failover, and maximizing infrastructure utilization.
GLBP roles include AVG (Active Virtual Gateway) managing the group and assigning virtual MACs, AVFs (Active Virtual Forwarders) forwarding traffic with unique virtual MACs, standby routers ready to take over, and all routers potentially participating in forwarding.
Load balancing methods show round-robin distributing ARP responses sequentially, weighted balancing based on configured capacity, host-dependent maintaining consistent client-to-gateway mapping, and administrator-configured algorithms optimizing distribution.
Advantages over alternatives demonstrate all routers actively forwarding, no idle capacity waste, better bandwidth utilization, automatic load distribution, seamless addition of forwarders, and improved resource efficiency.
Redundancy mechanisms maintain availability if AVF fails, promote standby to active automatically, redistribute load across remaining forwarders, continue operation transparently, and provide robust fault tolerance.
Configuration elements include defining GLBP group number, configuring virtual IP address, setting priority for AVG election, selecting load balancing method, configuring preemption if desired, and enabling on interfaces.
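A basic sketch for one group member follows; the group number, addresses, and load-balancing method are examples:

interface GigabitEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 glbp 1 ip 10.1.1.1
 ! Highest priority becomes the AVG that hands out virtual MAC addresses
 glbp 1 priority 110
 glbp 1 preempt
 glbp 1 load-balancing round-robin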
Use cases include environments needing traffic distribution, multiple equal-cost Internet connections, maximizing gateway capacity, preventing bottlenecks, and optimizing infrastructure investment.
Best practices recommend implementing GLBP when load distribution needed, configuring appropriate priorities, selecting suitable balancing method, monitoring forwarder distribution, testing failover behavior, and documenting configuration.
Why other options are incorrect:
A) HSRP provides active-standby redundancy only, single router forwards while others standby, no load balancing, wastes standby capacity, though provides solid redundancy.
B) VRRP also active-standby model, one master forwards while backups idle, industry standard but no load balancing, and similar limitations to HSRP.
D) STP prevents Layer 2 loops, not gateway redundancy protocol, provides path redundancy for switches, doesn’t handle gateway services, and completely different purpose.
Question 57
An engineer must configure a switch so that unauthorized devices sending BPDUs on access ports cannot cause network-wide disruption. Which feature should be enabled?
A) PortFast
B) BPDU Guard
C) Root Guard
D) BPDU Filter
Answer: B
Explanation:
Spanning tree security requires protection against unauthorized switches. BPDU Guard prevents network disruption by immediately error-disabling ports receiving BPDUs, protecting against rogue switches, preventing topology changes from access ports, enforcing access port policy, and providing critical spanning tree security.
BPDU Guard typically configured on PortFast-enabled ports connecting end devices, expects no BPDUs on these ports, disables port immediately upon BPDU reception, requires manual recovery, and indicates security policy violation or misconfiguration.
Protection mechanism shows BPDU received on guarded port, switch immediately transitioning port to err-disabled state, blocking all traffic, generating syslog messages, requiring administrative intervention, and preventing spanning tree manipulation.
Use cases include access ports connecting end devices, preventing users from connecting switches, protecting against accidental switch connections, enforcing network policies, and maintaining spanning tree stability.
PortFast relationship shows BPDU Guard typically enabled with PortFast, PortFast bypassing listening and learning states, combination providing both fast convergence and security, and protecting against spanning tree attacks.
Configuration methods include per-interface using spanning-tree bpduguard enable, globally for all PortFast ports with spanning-tree portfast bpduguard default, and verifying with show spanning-tree summary.
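A brief sketch of both methods follows; the interface and recovery timer are examples:

! Global: protect every PortFast-enabled access port
spanning-tree portfast bpduguard default
!
interface GigabitEthernet0/10
 switchport mode access
 spanning-tree portfast
 ! Per-interface alternative to the global command
 spanning-tree bpduguard enable
!
! Optional automatic recovery from the err-disabled state
errdisable recovery cause bpduguard
errdisable recovery interval 300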
Recovery process requires identifying violation cause, removing offending device, investigating incident, administratively recovering port with shutdown then no shutdown, or enabling automatic recovery with errdisable recovery.
Global vs interface shows global applying to all PortFast ports automatically, interface configuration applying specifically, and both methods providing protection depending on design.
Best practices recommend enabling BPDU Guard on all access ports, using global configuration for consistency, implementing errdisable recovery for automation, monitoring violations, documenting security policy, and training staff.
Why other options are incorrect:
A) PortFast allows immediate transition to forwarding, doesn’t protect against BPDUs, should be combined with BPDU Guard for security, and serves different purpose.
C) Root Guard prevents inferior BPDUs from becoming root, protects root bridge election, different spanning tree protection, doesn’t disable ports receiving BPDUs, and addresses different security concern.
D) BPDU Filter blocks BPDUs completely, prevents sending and receiving, different approach, can create loops if misconfigured, and generally not recommended except specific scenarios.
Question 58
A company needs to implement network monitoring to collect interface statistics. Which protocol should be configured for polling device metrics?
A) NetFlow
B) SNMP
C) Syslog
D) RMON
Answer: B
Explanation:
Network monitoring and management require standardized protocols. SNMP (Simple Network Management Protocol) provides polling-based monitoring to collect device statistics, retrieves interface metrics, system information, and performance data, uses manager-agent model, supports comprehensive MIB access, and represents standard protocol for network monitoring.
SNMP enables management stations polling network devices, retrieving information from Management Information Bases (MIBs), collecting interface statistics, monitoring device health, and enabling proactive management.
SNMP operations include GET retrieving specific MIB objects, GET-NEXT traversing MIB trees, GET-BULK retrieving multiple objects efficiently, SET modifying device configuration, and TRAP sending asynchronous notifications.
Interface statistics accessible include interface counters (packets, bytes, errors), utilization metrics, operational status, speed and duplex, error rates, and comprehensive performance data.
Polling mechanisms show management station sending GET requests periodically, device responding with requested values, collecting metrics over time, storing in monitoring databases, and analyzing trends.
SNMP versions include SNMPv1 with basic functionality, SNMPv2c adding bulk operations, and SNMPv3 providing encryption and authentication for security.
Configuration requirements involve enabling SNMP on devices, configuring community strings or users, defining access permissions, setting trap destinations if needed, and specifying accessible MIB objects.
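An SNMPv3 sketch follows; the group, user, passwords, and management-station address are examples:

! Restrict polling to the management station
access-list 10 permit 10.20.30.40
!
! SNMPv3 authPriv: authentication plus encryption
snmp-server group MONITOR-GROUP v3 priv access 10
snmp-server user monitor-user MONITOR-GROUP v3 auth sha AuthPass123 priv aes 128 PrivPass123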
Use cases include monitoring interface utilization, tracking error rates, collecting performance metrics, capacity planning, troubleshooting network issues, and maintaining SLA compliance.
Best practices recommend using SNMPv3 for security, implementing read-only access when possible, limiting SNMP access with ACLs, monitoring critical metrics, maintaining consistent configuration, and integrating with monitoring platforms.
Why other options are incorrect:
A) NetFlow exports traffic flow data, analyzes traffic patterns, provides detailed flow information, but doesn’t poll device statistics, and serves traffic analysis purpose.
C) Syslog collects log messages, event-based not polling, provides alerts and notifications, doesn’t retrieve statistics on demand, and different monitoring approach.
D) RMON (Remote Monitoring) is SNMP-based monitoring extension, provides enhanced statistics, but SNMP is core protocol, and RMON builds on SNMP foundation.
Question 59
An administrator must configure BGP to prevent announcing customer routes to other customers. Which BGP feature should be implemented?
A) Route reflector
B) AS-PATH filtering
C) Community filtering
D) Private AS removal
Answer: B
Explanation:
BGP routing policy control requires filtering mechanisms. AS-PATH filtering prevents announcing customer routes to other customers by examining AS-PATH attribute, implementing access lists matching path patterns, controlling route advertisement, enforcing routing policies, preventing transit between customers, and maintaining proper routing relationships.
AS-PATH filtering examines autonomous system path in route advertisements, matches specific AS numbers or patterns, applies permit or deny actions, controls route propagation, and enables fine-grained routing policy.
Service provider scenarios show customer A routes shouldn’t transit to customer B, filtering prevents inappropriate announcements, maintains customer isolation, implements proper peering policies, and protects routing integrity.
AS-PATH regular expressions match path patterns, use metacharacters for flexibility, identify specific AS numbers, detect path sequences, and enable sophisticated filtering.
Filter placement requires applying outbound to prevent announcements, configuring on customer-facing interfaces, implementing consistent policy, testing thoroughly, and documenting routing policy.
Policy implementation includes creating AS-PATH access lists, defining permit/deny rules, applying to BGP neighbors with filter-list, verifying advertisement control, and maintaining policy documentation.
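A simplified outbound filter sketch follows; the AS numbers and neighbor address are examples:

! Deny any route whose AS-PATH contains customer A's AS (65101)
! when advertising toward customer B, and permit everything else
ip as-path access-list 20 deny _65101_
ip as-path access-list 20 permit .*
!
router bgp 65000
 neighbor 198.51.100.2 remote-as 65102
 neighbor 198.51.100.2 filter-list 20 out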
Common patterns show filtering customers from each other, allowing provider routes, controlling transit, preventing route leaks, and implementing security policies.
Verification uses show ip bgp command checking announced routes, verifying filters applied, confirming policy effectiveness, monitoring route advertisements, and validating customer isolation.
Best practices recommend implementing strict filtering, documenting routing policies, using communities as alternative, testing filter effectiveness, monitoring route advertisements, and maintaining policy consistency.
Why other options are incorrect:
A) Route reflector addresses iBGP scaling, reflects routes within AS, doesn’t control customer route advertisement, and serves different architectural purpose.
C) Community filtering uses BGP communities for policy, alternative approach to AS-PATH, valid option, but AS-PATH filtering more directly addresses AS-based filtering requirement.
D) Private AS removal strips private AS numbers, different purpose than preventing customer-to-customer advertisement, and doesn’t provide route filtering needed.
Question 60
A network team needs to implement secure remote access for administrators. Which protocol provides encrypted remote CLI access?
A) Telnet
B) SSH
C) HTTP
D) SNMP
Answer: B
Explanation:
Secure remote management requires encrypted communication. SSH (Secure Shell) provides encrypted remote command-line access, protects credentials and session data, prevents eavesdropping and man-in-the-middle attacks, uses strong encryption algorithms, supports public-key authentication, and represents security best practice for remote device management.
SSH encrypts all communication including authentication credentials, session data, and commands, using algorithms like AES for confidentiality, HMAC for integrity, and supporting both password and key-based authentication.
Security features include strong encryption protecting data confidentiality, authentication preventing unauthorized access, integrity checking detecting tampering, server verification through host keys, and protection against various attacks.
SSH versions show SSHv1 deprecated due to vulnerabilities, SSHv2 providing improved security, current standard support, enhanced encryption, and better authentication mechanisms.
Configuration requirements involve generating RSA keys, enabling SSH server, configuring VTY lines for SSH, setting authentication methods, defining access lists, and disabling Telnet.
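A standard sketch follows; the hostname, domain name, and credentials are examples:

hostname R1
ip domain-name example.com
! An RSA key pair must exist before SSH will run; 2048 bits or larger is recommended
crypto key generate rsa modulus 2048
ip ssh version 2
!
username admin privilege 15 secret StrongPassword
!
line vty 0 4
 login local
 transport input ssh
 exec-timeout 10 0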
Authentication methods include password-based using local or AAA, public-key authentication with key pairs, and certificate-based for enhanced security.
Key generation creates RSA or DSA key pairs, determines key length (recommend 2048+ bits), enables SSH functionality, secures server identity, and provides cryptographic foundation.
Access control implements ACLs restricting SSH access, limits source addresses, uses VTY line configuration, implements timeout values, and maintains security policies.
Best practices recommend using SSH exclusively, disabling Telnet, implementing strong passwords, using key-based authentication, restricting access with ACLs, monitoring login attempts, and maintaining software updates.
Why other options are incorrect:
A) Telnet transmits everything in cleartext, exposes passwords, vulnerable to eavesdropping, completely insecure, and should be disabled in production.
C) HTTP is web protocol, not CLI access, also unencrypted by default, HTTPS provides encryption but not for CLI, and different management approach.
D) SNMP is monitoring protocol, not for interactive CLI access, SNMPv3 provides encryption, but serves monitoring not remote shell purpose.