Question 1
A network engineer is configuring OSPF on a Cisco router and needs to ensure that a specific interface does not form OSPF adjacencies but still advertises its connected network. Which OSPF interface configuration accomplishes this requirement?
A) Configure the interface as passive using the passive-interface command
B) Configure OSPF priority to 0 on the interface
C) Disable OSPF completely on the interface
D) Set the OSPF cost to maximum value
Answer: A
Explanation:
OSPF operates by forming adjacencies with neighboring routers to exchange link-state information and build the topology database. However, certain interfaces like those connected to end-user networks do not need to form adjacencies because no OSPF routers exist on those segments.
When an interface is configured as passive in OSPF using the passive-interface command, the router continues to advertise the connected network prefix into OSPF, allowing other routers to learn routes to that network. However, the router stops sending OSPF hello packets out that interface and will not form adjacencies with any potential neighbors on that segment. This configuration provides several benefits including reduced OSPF protocol overhead by eliminating unnecessary hello packets, enhanced security by preventing unauthorized routers from forming adjacencies on access networks, and simplified troubleshooting. The passive-interface command is typically applied to interfaces connecting to user VLANs, stub networks, or any segment where OSPF adjacencies are unnecessary.
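As a minimal IOS-style sketch (process ID, network statement, and interface name are illustrative), a passive interface toward a user LAN might look like:

```
router ospf 1
 ! Advertise the user LAN prefix into area 0
 network 192.168.10.0 0.0.0.255 area 0
 ! Suppress hellos and adjacencies on the user-facing interface
 passive-interface GigabitEthernet0/1
```

The prefix is still advertised because the network statement covers the interface; only hello transmission is suppressed.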
B is incorrect because setting OSPF priority to 0 affects designated router election on broadcast networks, preventing the router from becoming DR or BDR. However, the interface still sends hello packets and forms adjacencies normally. Priority 0 does not prevent adjacency formation.
C is incorrect because disabling OSPF completely on the interface prevents both adjacency formation and network advertisement. When OSPF is disabled, the connected network prefix is not advertised into OSPF, meaning other routers cannot learn routes to reach that network, breaking connectivity.
D is incorrect because setting OSPF cost to maximum affects path selection metrics but does not prevent adjacency formation. The interface continues sending hello packets and maintaining neighbor relationships. High cost makes the path less preferred but does not eliminate adjacencies or reduce protocol overhead.
Question 2
An enterprise network uses HSRP for gateway redundancy. The network team wants to ensure that a specific router always becomes the active router when it is operational. Which HSRP parameter should be configured?
A) Configure higher HSRP priority on the preferred router
B) Configure lower HSRP priority on the preferred router
C) Disable preemption on all routers
D) Configure identical priorities on both routers
Answer: A
Explanation:
HSRP provides first-hop redundancy by creating a virtual router with a virtual IP address and MAC address shared between multiple physical routers. The active router handles traffic forwarding while standby routers monitor the active router’s health, ready to assume the active role if failure occurs.
HSRP uses priority values ranging from 0 to 255 with a default of 100 to determine which router becomes active. The router with the highest priority becomes the active router during election. To ensure a specific router consistently becomes active when operational, administrators configure that router with a higher priority value than other HSRP group members. For example, configuring Router A with priority 110 and Router B with priority 100 ensures Router A becomes active. The priority value can be adjusted based on interface states using tracking, allowing automatic failover when monitored interfaces fail. Preemption must also be enabled for the higher-priority router to reclaim the active role after recovery from failure. Without preemption, the current active router continues operating even when a higher-priority router becomes available. The combination of higher priority and preemption ensures deterministic active router selection, allowing administrators to control which router handles traffic under normal conditions while maintaining automatic failover capability.
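A sketch of the preferred router's configuration (addresses, group number, and the tracked uplink are illustrative; the legacy interface-tracking syntax is shown):

```
interface GigabitEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 standby 1 ip 192.168.1.1                 ! shared virtual gateway address
 standby 1 priority 110                   ! higher than the default 100
 standby 1 preempt                        ! reclaim active role after recovery
 standby 1 track GigabitEthernet0/1 20    ! optional: decrement priority if uplink fails
```

With tracking, an uplink failure drops the priority to 90, letting the 100-priority peer preempt and take over.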
B is incorrect because configuring lower priority makes the router less likely to become active. HSRP election selects the highest priority router, so lower priority values decrease the likelihood of becoming active. This configuration would have the opposite effect of what is intended.
C is incorrect because disabling preemption prevents a higher-priority router from reclaiming the active role once another router has already become active. Without preemption, the first router to become active remains active regardless of priority, eliminating deterministic control over which router is active.
D is incorrect because identical priorities result in tie-breaking using IP addresses, where the highest IP address becomes active. This creates unpredictable behavior based on addressing rather than administrative preference, providing no control over which router becomes active.
Question 3
A network administrator needs to configure EtherChannel between two Cisco switches to aggregate bandwidth and provide link redundancy. Which protocol should be used for dynamic EtherChannel negotiation with Cisco proprietary features?
A) PAgP (Port Aggregation Protocol)
B) LACP (Link Aggregation Control Protocol)
C) LLDP (Link Layer Discovery Protocol)
D) CDP (Cisco Discovery Protocol)
Answer: A
Explanation:
EtherChannel aggregates multiple physical links into a single logical interface, providing increased bandwidth and redundancy. Multiple protocols can establish EtherChannels with different negotiation mechanisms and feature support.
PAgP is Cisco’s proprietary EtherChannel protocol that dynamically negotiates channel formation between switches. PAgP operates in two modes: auto, where the interface responds to PAgP packets but does not initiate negotiation, and desirable, where the interface actively initiates negotiation. (The on mode forces the channel statically without running PAgP or any other negotiation protocol.) PAgP provides Cisco-specific features and tight integration with Cisco switch platforms. The protocol exchanges packets between switches to verify configuration compatibility including speed, duplex, VLAN assignments, and trunking mode. When compatible configurations are detected, PAgP forms the EtherChannel automatically. PAgP also monitors the channel for failures and can suspend ports if misconfiguration is detected, providing built-in error detection. While PAgP offers robust functionality, it works only between Cisco devices due to its proprietary nature.
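A sketch of a two-link PAgP bundle (interface and channel-group numbers are illustrative; at least one side should be desirable so negotiation is initiated):

```
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode desirable    ! actively negotiate the bundle with PAgP
!
interface Port-channel1
 switchport mode trunk             ! logical interface carries the channel config
```

A desirable/auto or desirable/desirable pairing forms the channel; auto/auto never does, because neither side initiates.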
B is incorrect because LACP is an IEEE 802.3ad standard protocol for EtherChannel negotiation, not Cisco proprietary. While LACP works with Cisco switches and provides excellent multi-vendor interoperability, the question specifically asks for Cisco proprietary features. LACP is the preferred choice for multi-vendor environments.
C is incorrect because LLDP is a neighbor discovery protocol used for identifying connected devices and their capabilities, not for EtherChannel negotiation. LLDP helps administrators understand network topology but does not aggregate links or provide bandwidth increase.
D is incorrect because CDP is Cisco’s proprietary neighbor discovery protocol that collects information about directly connected Cisco devices. Like LLDP, CDP discovers neighbors but does not create EtherChannels or aggregate bandwidth. Discovery protocols and link aggregation protocols serve different purposes.
Question 4
An organization implements VLANs to segment network traffic. A network engineer needs to configure a switch port to carry traffic for multiple VLANs simultaneously. Which port mode should be configured?
A) Configure the port as a trunk port
B) Configure the port as an access port
C) Configure the port in dynamic desirable mode
D) Disable the port completely
Answer: A
Explanation:
VLANs provide Layer 2 network segmentation by creating separate broadcast domains within a single physical switch infrastructure. Different port modes determine how VLANs are assigned and how traffic is handled on switch interfaces.
Trunk ports carry traffic for multiple VLANs simultaneously by using VLAN tagging protocols, primarily 802.1Q in modern networks. When frames traverse a trunk link, they are tagged with VLAN identifiers allowing receiving switches to determine which VLAN each frame belongs to. Trunk ports are essential for connecting switches together, linking switches to routers for inter-VLAN routing, and connecting to servers that need access to multiple VLANs. The 802.1Q standard inserts a 4-byte tag into Ethernet frames containing the VLAN ID and priority information. One VLAN is designated as the native VLAN, which transmits untagged across the trunk for backward compatibility, though this practice is often avoided for security reasons. Trunk configuration includes specifying which VLANs are allowed on the trunk, with the default allowing all VLANs. Administrators typically prune unnecessary VLANs from trunks to reduce broadcast traffic and enhance security. DTP can dynamically negotiate trunking, though manual configuration is preferred for security and predictability.
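A hedged sketch of a hardened 802.1Q trunk (interface, VLAN IDs, and the unused native VLAN are illustrative; the encapsulation command applies only on platforms that also support ISL):

```
interface GigabitEthernet0/24
 switchport trunk encapsulation dot1q     ! required where ISL is also supported
 switchport mode trunk
 switchport trunk native vlan 999         ! unused native VLAN for security
 switchport trunk allowed vlan 10,20,30   ! prune to only the required VLANs
 switchport nonegotiate                   ! disable DTP negotiation
```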
B is incorrect because access ports belong to a single VLAN and carry traffic for only that VLAN without tagging. Access ports connect to end devices like computers, phones, or printers. When devices send frames to access ports, the switch internally associates those frames with the configured VLAN. Access ports cannot carry multiple VLANs.
C is incorrect because dynamic desirable mode is a DTP setting that attempts to negotiate trunking but is not a configuration that inherently carries multiple VLANs. DTP modes determine whether trunking is negotiated, but the actual trunk configuration is what enables multi-VLAN transport. Additionally, DTP is often disabled for security.
D is incorrect because disabling the port completely prevents any traffic flow and does not configure multi-VLAN support. Disabled ports are administratively shut down and forward no traffic regardless of VLAN configuration.
Question 5
A company’s network experiences spanning tree topology changes causing temporary connectivity disruptions. Which Spanning Tree Protocol feature helps reduce convergence time by bypassing listening and learning states on specific ports?
A) PortFast on access ports connecting to end devices
B) Increasing bridge priority on all switches
C) Disabling STP completely on all switches
D) Configuring all ports as trunk ports
Answer: A
Explanation:
Spanning Tree Protocol prevents Layer 2 loops by blocking redundant paths while maintaining network connectivity. Standard STP convergence involves multiple states that ports transition through, causing delays when topology changes occur.
PortFast is an STP enhancement that allows access ports to bypass the listening and learning states, moving directly to the forwarding state when the link comes up. Normal STP ports spend 15 seconds in listening state and 15 seconds in learning state before reaching forwarding state, creating a 30-second delay during link initialization. For end devices like workstations or servers, this delay is unnecessary because these devices do not create loops. PortFast should only be enabled on ports connecting to end devices, never on ports connecting to switches or bridges, because enabling PortFast on inter-switch links could allow temporary loops during topology changes. When properly deployed, PortFast significantly improves end-user experience by providing immediate network connectivity when devices boot or cables are connected. PortFast also benefits DHCP by ensuring ports are forwarding before DHCP discover packets are sent, preventing DHCP timeouts. BPDU Guard is typically configured alongside PortFast to protect against accidental or malicious connection of switches to PortFast-enabled ports. If BPDUs are received on a PortFast port with BPDU Guard enabled, the port is automatically disabled, preventing loop formation.
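A minimal edge-port sketch combining both features (interface and VLAN are illustrative):

```
interface GigabitEthernet0/5
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast           ! skip listening/learning on this edge port
 spanning-tree bpduguard enable   ! err-disable the port if a BPDU arrives
```

Alternatively, spanning-tree portfast default enables PortFast globally on all access ports, with BPDU Guard as the safety net.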
B is incorrect because bridge priority determines which switch becomes the root bridge in STP topology but does not affect convergence time or state transitions. Priority values influence root bridge election during initial convergence or after topology changes but do not reduce the time ports spend in listening and learning states.
C is incorrect because disabling STP completely removes loop prevention protection, creating significant risk of broadcast storms and network instability if redundant physical paths exist. Switching loops cause network-wide failures with CPU exhaustion, frame duplication, and MAC table instability. STP should never be disabled in networks with redundancy.
D is incorrect because configuring ports as trunks changes VLAN handling but does not affect STP convergence time. Trunk ports still participate in STP and transition through the same states as access ports unless PortFast or similar features are configured. Trunk configuration is independent of STP convergence optimization.
Question 6
A network engineer needs to configure inter-VLAN routing to allow communication between different VLANs in the network. Which device or configuration method enables inter-VLAN routing?
A) Layer 3 switch with SVI configuration or router with subinterfaces
B) Layer 2 switch without routing capabilities
C) Hub connecting all VLANs
D) Unmanaged switch
Answer: A
Explanation:
VLANs create separate Layer 2 broadcast domains that logically segment networks. By design, VLANs isolate traffic so devices in different VLANs cannot communicate directly. Inter-VLAN routing requires Layer 3 functionality to forward traffic between VLANs.
Layer 3 switches and routers provide inter-VLAN routing through different implementation methods. Layer 3 switches use Switch Virtual Interfaces, which are logical interfaces representing VLANs that can be assigned IP addresses and participate in routing. When SVIs for multiple VLANs exist with IP routing enabled, the Layer 3 switch routes packets between VLANs at wire speed using hardware ASICs. For example, creating SVI for VLAN 10 with IP 192.168.10.1 and SVI for VLAN 20 with IP 192.168.20.1 enables hosts in these VLANs to communicate through the switch’s routing function. The router-on-a-stick method uses a router with 802.1Q subinterfaces where each subinterface associates with a specific VLAN and receives tagged traffic from a trunk link connecting the router to a switch. The router’s subinterfaces serve as default gateways for their respective VLANs, and the router forwards packets between subinterfaces to enable inter-VLAN communication. Layer 3 switches are preferred for performance, while router-on-a-stick is used in smaller networks or where Layer 3 switches are unavailable.
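Both methods can be sketched as follows, reusing the gateway addresses from the example above (interface names and VLAN IDs are illustrative):

```
! Layer 3 switch: SVIs act as VLAN gateways
ip routing
interface Vlan10
 ip address 192.168.10.1 255.255.255.0
interface Vlan20
 ip address 192.168.20.1 255.255.255.0

! Router-on-a-stick alternative: 802.1Q subinterfaces on a trunk
interface GigabitEthernet0/0.10
 encapsulation dot1Q 10
 ip address 192.168.10.1 255.255.255.0
interface GigabitEthernet0/0.20
 encapsulation dot1Q 20
 ip address 192.168.20.1 255.255.255.0
```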
B is incorrect because Layer 2 switches operate only at the data link layer without routing capabilities. Layer 2 switches forward frames based on MAC addresses within VLANs but cannot route packets between different IP subnets or VLANs. Inter-VLAN routing requires Layer 3 intelligence that Layer 2 switches lack.
C is incorrect because hubs are Layer 1 devices that simply repeat electrical signals to all ports without any understanding of VLANs, MAC addresses, or IP routing. Hubs cannot provide VLAN segmentation or routing functionality. Modern networks have replaced hubs with switches.
D is incorrect because unmanaged switches provide basic Layer 2 switching without configuration options, VLAN support, or routing capabilities. Unmanaged switches forward all traffic as a single broadcast domain without VLAN awareness or ability to route between networks.
Question 7
An organization wants to implement network automation using model-driven programmability. Which data format is commonly used by YANG data models for network device configuration and state data?
A) XML or JSON encoding formats
B) Binary executable files
C) Proprietary encrypted formats
D) Plain text without structure
Answer: A
Explanation:
Network automation and programmability rely on structured data representations that both humans and machines can process consistently. YANG models define the structure and semantics of network configuration and operational state data using a standardized modeling language.
YANG models are typically encoded using XML or JSON formats when transmitted between network devices and management systems. XML uses nested tags to represent hierarchical data structures, providing explicit structure with opening and closing tags that clearly delineate data elements. JSON uses key-value pairs and arrays with lighter syntax, making it more human-readable and efficient for network transmission. Both formats support the hierarchical structure required for complex network configurations. NETCONF protocol traditionally uses XML encoding for YANG data, while RESTCONF supports both XML and JSON, with JSON being more common due to its simplicity and widespread adoption in REST APIs. These structured formats enable programmatic access to configuration where automation tools can parse the data, modify specific elements, and generate valid configurations. Structured data also facilitates validation against YANG schemas, ensuring configuration consistency before deployment. APIs using YANG models with XML or JSON encoding provide vendor-neutral methods for network automation, allowing unified management across multi-vendor environments.
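As an illustration (a simplified fragment loosely modeled on ietf-interfaces; the interface name is arbitrary), the same data can be encoded either way:

```xml
<interface>
  <name>GigabitEthernet0/1</name>
  <enabled>true</enabled>
</interface>
```

```json
{
  "interface": {
    "name": "GigabitEthernet0/1",
    "enabled": true
  }
}
```

Both carry identical structure, which is why NETCONF (XML) and RESTCONF (XML or JSON) can expose the same YANG model through different encodings.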
B is incorrect because binary executable files are compiled programs containing machine code, not data representation formats for configuration. Binary formats are not human-readable and do not support the structured, hierarchical data representation that YANG models require. Configuration management needs text-based structured formats.
C is incorrect because proprietary encrypted formats would prevent interoperability and vendor-neutral automation, defeating the purpose of standardized YANG models. While encryption can protect data in transit, YANG encoding itself uses open standard formats that tools can process regardless of vendor implementation.
D is incorrect because plain text without structure lacks the hierarchical organization and explicit delimiters necessary for representing complex network configurations. Unstructured text cannot be reliably parsed by automation tools and does not support validation against schemas or programmatic manipulation of specific configuration elements.
Question 8
A network administrator needs to configure quality of service to prioritize voice traffic over data traffic. Which QoS mechanism classifies and marks traffic near the network edge?
A) Classification and marking using DSCP or CoS values
B) Dropping all traffic randomly
C) Forwarding all traffic with equal priority
D) Disabling QoS completely
Answer: A
Explanation:
Quality of Service mechanisms ensure that critical applications receive appropriate network resources including bandwidth, low latency, and minimal packet loss. QoS implementation follows a trust boundary model where traffic is classified and marked as it enters the network.
Classification and marking identify traffic types and apply priority indicators that downstream devices use for QoS treatment. DSCP uses 6 bits in the IP header’s ToS field to mark packets with up to 64 different values indicating priority and drop preference. Common DSCP values include EF for voice traffic requiring low latency, AF classes for differentiated service levels, and BE for best-effort traffic. CoS uses 3 bits in the 802.1Q VLAN tag at Layer 2, providing 8 priority levels. Traffic classification near the network edge, typically at access switches or edge routers, uses various criteria including source/destination IP addresses, port numbers, protocols, or application signatures to identify traffic types. Once classified, appropriate DSCP or CoS values are marked on packets. Downstream routers and switches trust these markings and apply configured QoS policies including prioritized queuing, bandwidth allocation, and preferential forwarding. Edge classification is preferred because it provides consistent markings throughout the network and prevents end devices from manipulating QoS markings inappropriately.
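A hedged MQC sketch of edge classification and marking (class and policy names are arbitrary; match protocol rtp assumes NBAR support on the platform):

```
class-map match-any VOICE
 match protocol rtp              ! identify voice media via NBAR
policy-map EDGE-MARK
 class VOICE
  set dscp ef                    ! mark voice as Expedited Forwarding
 class class-default
  set dscp default               ! everything else is best effort
!
interface GigabitEthernet0/1
 service-policy input EDGE-MARK  ! classify inbound at the trust boundary
```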
B is incorrect because dropping all traffic randomly would destroy network functionality rather than providing quality of service. Random dropping might be a congestion management technique like RED for specific queues, but it is not a classification or marking mechanism and would not prioritize voice traffic.
C is incorrect because forwarding all traffic with equal priority provides no quality of service differentiation. Without prioritization, voice traffic receives the same treatment as bulk data transfers, potentially resulting in unacceptable latency, jitter, and packet loss that degrade voice quality during congestion.
D is incorrect because disabling QoS eliminates traffic prioritization, causing all traffic to receive identical treatment. Without QoS, real-time applications like voice and video compete equally with bandwidth-intensive applications like backups and downloads, resulting in poor user experience for latency-sensitive applications during congestion.
Question 9
An enterprise network uses EIGRP for dynamic routing. The network engineer needs to configure EIGRP to advertise a default route to all EIGRP neighbors. Which configuration accomplishes this?
A) Configure ip default-network or redistribute static default route into EIGRP
B) Disable EIGRP on all interfaces
C) Set EIGRP administrative distance to 255
D) Remove all EIGRP neighbor relationships
Answer: A
Explanation:
Dynamic routing protocols learn and distribute network reachability information throughout the network. Default routes provide a gateway of last resort for traffic destined to networks not explicitly known in the routing table, typically pointing toward internet connectivity.
EIGRP supports multiple methods for advertising default routes to neighbors. The ip default-network command designates a classful network as the default network, which EIGRP then advertises to neighbors. This method requires the network to exist in the routing table and be advertised through EIGRP. The more flexible approach creates a static default route (ip route 0.0.0.0 0.0.0.0 pointing toward the internet next hop) and redistributes it into EIGRP using the redistribute static command under the EIGRP process. This causes EIGRP to advertise the default route to all neighbors. The ip summary-address eigrp command on an interface can also advertise a default route by summarizing all networks to 0.0.0.0/0. Default route advertisement is typically configured on edge routers connecting to the internet or core routers aggregating routes from distribution layers. Neighbors receiving the default route install it in their routing tables with appropriate metric and administrative distance, using it for traffic to unknown destinations. Default route advertisement simplifies routing tables in stub networks by eliminating the need to advertise all specific routes.
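A sketch of the redistribution approach (AS number, next hop, and seed metric values are illustrative; redistribution into EIGRP requires a seed metric via the metric keyword or default-metric):

```
! Static default toward the internet edge
ip route 0.0.0.0 0.0.0.0 203.0.113.1
!
router eigrp 100
 network 10.0.0.0
 ! bandwidth(Kbps) delay(10s of usec) reliability load MTU
 redistribute static metric 1000000 1 255 1 1500
```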
B is incorrect because disabling EIGRP on all interfaces prevents any EIGRP operation including neighbor relationships, route learning, and route advertisement. Disabling EIGRP eliminates dynamic routing completely rather than advertising a default route. EIGRP must be operational to exchange routing information.
C is incorrect because setting EIGRP administrative distance to 255 marks EIGRP routes as untrusted, preventing their installation in the routing table. Routes with AD 255 are rejected and not used for forwarding. This configuration would prevent all EIGRP routes from being used, not advertise a default route.
D is incorrect because removing all EIGRP neighbor relationships prevents EIGRP from functioning and exchanging routing information. Without neighbor relationships, EIGRP cannot advertise any routes including default routes. Functional neighbor relationships are prerequisites for route advertisement.
Question 10
A network administrator configures NAT on a router to allow internal private IP addresses to access the internet using a pool of public IP addresses. Which NAT type provides this functionality?
A) Dynamic NAT with address pool
B) Static NAT one-to-one mapping for all addresses
C) NAT disabled completely
D) Port forwarding for inbound connections only
Answer: A
Explanation:
Network Address Translation enables private IP addresses to communicate across the internet by translating them to public routable addresses. Different NAT types provide various translation methods for different use cases.
Dynamic NAT uses a pool of public IP addresses to translate multiple internal private addresses. When an internal host initiates an outbound connection, the router dynamically allocates an available public IP from the configured pool for the translation. The mapping between private and public addresses is temporary, existing only for the duration of the session. When the session ends, the public IP returns to the pool for reuse by other hosts. Dynamic NAT configuration requires defining an inside source access list specifying which internal addresses are eligible for translation, creating a pool of public IP addresses available for allocation, and associating the access list with the pool using NAT configuration commands. Dynamic NAT provides security by hiding internal network structure and conserves public IP addresses compared to giving each internal host a static public address. However, dynamic NAT requires sufficient public IPs in the pool to support simultaneous outbound connections. If the pool exhausts, additional connections fail until addresses become available. Dynamic NAT is appropriate for organizations with limited public IP address allocations that need to provide internet access to larger numbers of internal hosts.
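A minimal dynamic NAT sketch tying the pieces together (ACL number, pool range, and interfaces are illustrative):

```
! Inside hosts eligible for translation
access-list 1 permit 10.0.0.0 0.255.255.255
! Public addresses available for allocation
ip nat pool PUBLIC 203.0.113.10 203.0.113.20 netmask 255.255.255.0
! Bind the ACL to the pool
ip nat inside source list 1 pool PUBLIC
!
interface GigabitEthernet0/0
 ip nat inside
interface GigabitEthernet0/1
 ip nat outside
```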
B is incorrect because static NAT creates permanent one-to-one mappings between specific private and public IP addresses. Static NAT is used for servers that need consistent public addresses for inbound connections. Mapping all addresses statically would require a public IP for every internal host, which defeats the purpose of address conservation and is impractical for most organizations.
C is incorrect because disabling NAT prevents internal private addresses from accessing the internet entirely, as private address ranges are not routable on the public internet. Without NAT, routers would drop packets with private source addresses, preventing any internet connectivity for internal networks using RFC 1918 addressing.
D is incorrect because port forwarding or static NAT port mapping translates specific ports on public addresses to internal servers, typically for inbound connections. Port forwarding allows external users to access internal services but does not provide outbound NAT for internal clients accessing the internet.
Question 11
An organization implements wireless networking using Cisco access points and wireless LAN controllers. Which protocol is used for communication between access points and the wireless controller?
A) CAPWAP (Control and Provisioning of Wireless Access Points)
B) HTTP (Hypertext Transfer Protocol)
C) FTP (File Transfer Protocol)
D) TFTP (Trivial File Transfer Protocol)
Answer: A
Explanation:
Modern enterprise wireless architectures separate control plane functions from data plane functions using wireless LAN controllers that centrally manage access points. Standardized protocols enable communication between access points and controllers.
CAPWAP is the industry-standard protocol defined in RFC 5415 that enables communication between lightweight access points and wireless LAN controllers. CAPWAP creates tunnels between APs and controllers for both control traffic and data traffic. The control tunnel carries management frames, configuration information, AP firmware updates, and statistics using CAPWAP control messages over UDP port 5246. The data tunnel forwards client data traffic between APs and controllers using CAPWAP data messages over UDP port 5247. CAPWAP provides DTLS encryption for secure communication protecting configuration and management traffic. Access points discover controllers using DHCP option 43, DNS resolution, or local subnet broadcast, then establish CAPWAP tunnels for centralized management. Controllers use CAPWAP to configure AP radio parameters, push firmware updates, collect statistics, and coordinate roaming. CAPWAP replaced Cisco’s proprietary LWAPP protocol, providing vendor interoperability for wireless infrastructure management. Cisco APs support both local mode, where the controller centrally switches client data, and FlexConnect mode, where APs switch traffic locally.
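As a sketch of the DHCP option 43 discovery method on an IOS DHCP server (pool addressing and the controller IP 10.20.20.1 are illustrative; the hex string is sub-option type 0xf1, length 4, followed by the WLC address):

```
ip dhcp pool AP-POOL
 network 10.10.10.0 255.255.255.0
 default-router 10.10.10.1
 option 43 hex f1040a141401   ! f1=type, 04=length, 0a141401 = 10.20.20.1
```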
B is incorrect because HTTP is a web application protocol for transferring hypertext documents between browsers and servers. While HTTP might be used for accessing controller management interfaces, it is not the protocol for control plane communication between APs and controllers. HTTP operates at the application layer for different purposes.
C is incorrect because FTP is a file transfer protocol for uploading and downloading files between clients and servers. FTP is not designed for control plane communication or real-time management of network devices. While FTP might transfer firmware files, it is not the operational protocol for AP-controller communication.
D is incorrect because TFTP is a simplified file transfer protocol used primarily for transferring configuration files and firmware images during device boot processes. TFTP does not provide the rich control plane functionality, encapsulation, or security features required for comprehensive AP management and client data forwarding.
Question 12
A network engineer troubleshoots connectivity issues and needs to verify which route a router will use for a specific destination. Which command displays the routing table entry for a particular destination IP address?
A) show ip route [destination-ip] or show ip route summary
B) show interfaces status
C) show vlan brief
D) show mac address-table
Answer: A
Explanation:
Routing troubleshooting requires understanding how routers make forwarding decisions based on routing table entries. Different show commands provide various perspectives on network device operation and configuration.
The show ip route command displays the IP routing table containing all known routes including directly connected networks, static routes, and dynamically learned routes from protocols like OSPF, EIGRP, or BGP. Adding a specific destination IP address as a parameter shows detailed information about the route the router would use for that destination, including route source, administrative distance, metric, next-hop address, outgoing interface, and age. The command performs longest prefix match, showing the most specific route matching the destination address. The output indicates route type through codes like C for connected, S for static, O for OSPF, and D for EIGRP. The show ip route summary command provides statistics about routing table contents, including total routes per protocol and memory usage. These commands are essential for verifying routing behavior, identifying suboptimal routing, troubleshooting reachability issues, and confirming that route advertisements from dynamic protocols are functioning correctly. When troubleshooting connectivity problems, administrators compare routing tables on both source and destination routers to verify symmetric paths exist.
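An illustrative lookup (all addresses and timers are hypothetical; the output format follows the typical IOS routing-entry display):

```
Router# show ip route 192.168.20.5
Routing entry for 192.168.20.0/24
  Known via "ospf 1", distance 110, metric 20, type intra area
  Last update from 10.1.1.2 on GigabitEthernet0/0, 00:12:40 ago
  Routing Descriptor Blocks:
  * 10.1.1.2, from 10.255.0.2, 00:12:40 ago, via GigabitEthernet0/0
      Route metric is 20, traffic share count is 1
```

Here the longest match for 192.168.20.5 is an OSPF intra-area /24, forwarded via next hop 10.1.1.2.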
B is incorrect because show interfaces status displays physical and data link layer information about interfaces including status (up/down), VLAN assignment on switches, speed, and duplex. While interface status is important for connectivity, this command does not show routing information or indicate which path will be used for specific destinations.
C is incorrect because show vlan brief displays VLAN database information on switches including VLAN IDs, names, status, and port assignments. VLAN information is relevant for Layer 2 troubleshooting but does not provide routing table information needed to determine forwarding paths for specific IP destinations.
D is incorrect because show mac address-table displays Layer 2 MAC address information on switches showing which MAC addresses are learned on which ports. MAC address tables are used for Layer 2 frame switching within VLANs but do not contain routing information for Layer 3 destination IP addresses.
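As a quick sketch, the verification commands described above would be used as follows (the destination address 10.10.20.5 is an illustrative value):

```
! Look up the route the router would select for a specific destination;
! the output shows the longest-prefix match with its source, metric,
! next hop, outgoing interface, and age
Router# show ip route 10.10.20.5

! Review overall routing table statistics: route counts per source
! (connected, static, OSPF, EIGRP, BGP) and memory usage
Router# show ip route summary
```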
Question 13
An enterprise implements access control lists to filter traffic. A network administrator needs to permit SSH traffic while denying all other traffic to a specific server. Which protocol and port should be specified in the ACL?
A) TCP port 22
B) UDP port 22
C) TCP port 23
D) UDP port 69
Answer: A
Explanation:
Access control lists filter traffic based on various criteria including source addresses, destination addresses, protocols, and port numbers. Understanding protocol and port associations is essential for creating effective security policies.
SSH provides secure remote command-line access to network devices using strong encryption, authentication, and integrity checking. SSH operates over TCP port 22, using the reliable transport characteristics of TCP for session-oriented communication. When creating an ACL to permit SSH traffic to a server, administrators specify TCP as the protocol and 22 as the destination port. The ACL entry would typically appear as “permit tcp any host [server-ip] eq 22” to allow SSH connections from any source to the specific server. SSH replaced the insecure Telnet protocol which transmitted credentials and session data in clear text. Modern security best practices mandate SSH for all remote device management. ACLs permitting SSH should be carefully scoped to allow access only from trusted management subnets rather than allowing SSH from any source, implementing defense in depth. Extended ACLs on Cisco routers can filter based on source address, destination address, protocol type, source port, and destination port, providing granular traffic control. The ACL should also include an explicit deny statement for all other traffic to enforce the requirement of blocking everything except SSH.
B is incorrect because SSH uses TCP, not UDP. While port 22 is correct, specifying UDP would not match SSH traffic. UDP is a connectionless protocol used by different applications, and there is no standard UDP service on port 22 for SSH functionality.
C is incorrect because TCP port 23 is associated with Telnet, not SSH. Telnet provides unencrypted remote access and should not be used in modern networks due to security vulnerabilities. An ACL permitting port 23 would allow Telnet rather than SSH.
D is incorrect because UDP port 69 is used by TFTP for simple file transfers. TFTP is unrelated to SSH and does not provide secure remote access functionality. An ACL specifying UDP port 69 would permit TFTP traffic, not SSH.
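A minimal configuration sketch of the requirement above might look like the following; the server address 192.0.2.10, the ACL name, and the interface are illustrative values:

```
! Permit SSH (TCP/22) to the server, deny everything else destined to it
ip access-list extended PERMIT-SSH-ONLY
 permit tcp any host 192.0.2.10 eq 22
 deny   ip  any host 192.0.2.10
 ! Traffic to other destinations is unaffected by this policy
 permit ip  any any
!
! Apply inbound on the interface facing the traffic sources
interface GigabitEthernet0/1
 ip access-group PERMIT-SSH-ONLY in
```

In practice the permit statement would usually also restrict the source to a trusted management subnet rather than "any", as the explanation notes.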
Question 14
A company’s network experiences broadcast storms due to switching loops. Which Layer 2 protocol should be implemented to prevent loops while providing redundancy?
A) Spanning Tree Protocol (STP) with RSTP or MSTP
B) Disabling all redundant links permanently
C) Enabling routing protocols on switches
D) Removing all switches from the network
Answer: A
Explanation:
Redundant network paths provide fault tolerance but create potential for Layer 2 loops where frames circulate indefinitely, causing broadcast storms that exhaust network and device resources.
Spanning Tree Protocol prevents loops in switched networks by placing redundant ports in blocking state while maintaining a loop-free active topology. STP uses Bridge Protocol Data Units to discover topology and elect a root bridge based on lowest bridge priority or lowest MAC address. Each non-root switch calculates the shortest path to root and determines which ports forward and which block. Port states include blocking, listening, learning, and forwarding. RSTP provides faster convergence than original STP by introducing new port roles and states, converging in seconds rather than 30-50 seconds. RSTP uses alternate and backup port roles providing immediate failover. MSTP extends RSTP by mapping multiple VLANs to spanning tree instances, enabling load balancing across redundant links where different VLANs use different paths. STP variants maintain redundancy while preventing loops by blocking redundant paths under normal conditions and automatically activating blocked ports when active paths fail. Modern networks often implement RSTP or MSTP for optimal performance. STP configuration includes setting appropriate bridge priorities to control root bridge election and tuning timers for specific environments.
B is incorrect because permanently disabling all redundant links eliminates the fault tolerance that redundancy provides. Network resilience requires redundant paths that automatically activate during failures. Disabling redundancy creates single points of failure where any link or device failure causes connectivity loss.
C is incorrect because enabling routing protocols operates at Layer 3 and does not prevent Layer 2 switching loops. Routing and switching operate at different OSI layers with different loop prevention mechanisms. Routers use TTL to prevent routing loops, but this does not address Layer 2 frame forwarding loops in switched networks.
D is incorrect because removing all switches eliminates the entire switched infrastructure, making the network non-functional. Switches are essential for modern Ethernet networks providing high-speed frame forwarding and VLAN segmentation. The solution is implementing proper loop prevention, not eliminating switches.
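A basic deployment sketch on a Cisco switch might look like this; the VLAN number, priority value, and interface range are illustrative:

```
! Run RSTP on a per-VLAN basis (Rapid PVST+)
spanning-tree mode rapid-pvst
!
! Deterministically make this switch the root bridge for VLAN 10
! (lower priority wins; values are multiples of 4096)
spanning-tree vlan 10 priority 4096
!
! Harden access ports: skip listening/learning for hosts, and
! err-disable the port if a BPDU (i.e., a switch) appears on it
interface range GigabitEthernet1/0/1 - 24
 spanning-tree portfast
 spanning-tree bpduguard enable
```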
Question 180
A network administrator needs to implement a solution that provides automated network provisioning and configuration management across multiple branch offices. Which Cisco DNA Center feature should be used?
A) Network Time Travel
B) Software Image Management (SWIM)
C) Network Plug and Play (PnP)
D) Assurance Analytics
Answer: C
Explanation:
Cisco DNA Center’s Network Plug and Play (PnP) feature is specifically designed to provide automated network provisioning and configuration management across distributed locations like branch offices. This feature enables zero-touch deployment of network devices, eliminating the need for manual configuration at remote sites.
Network PnP automates the entire device onboarding process from the moment a new device is connected to the network. When a device boots up for the first time, it automatically discovers the DNA Center controller and downloads its configuration, software image, and day-zero settings. This automation significantly reduces deployment time and minimizes configuration errors that commonly occur with manual provisioning.
The PnP feature provides comprehensive automation including device discovery, software image validation and installation, configuration template application, and automated network connectivity verification. Administrators can pre-define site-specific configurations in DNA Center, and when devices are deployed at branch offices, they automatically receive the appropriate settings based on their location and role.
Network Time Travel is an assurance feature that allows administrators to review historical network states and troubleshoot past issues, not for provisioning. Software Image Management (SWIM) focuses on managing and upgrading device software images but does not provide the complete automated provisioning workflow that PnP offers. Assurance Analytics is used for network health monitoring and issue detection, providing insights into network performance rather than device provisioning.
For organizations with multiple branch offices, PnP dramatically simplifies network expansion. IT staff can ship preconfigured devices directly to branch locations where non-technical personnel can simply connect them to the network, and the devices automatically configure themselves based on predefined policies.
Question 15
An organization wants to segment its network traffic based on user identity rather than IP addresses. Which technology should be implemented?
A) VLAN Trunking Protocol (VTP)
B) TrustSec Security Group Tags (SGT)
C) Private VLANs (PVLAN)
D) Dynamic Trunking Protocol (DTP)
Answer: B
Explanation:
TrustSec Security Group Tags (SGT) is the Cisco technology designed to enable network segmentation based on user identity and device attributes rather than traditional IP address-based access control. This approach provides more flexible and scalable security policies that follow users regardless of their network location.
SGT assigns numerical tags to network traffic based on the identity of users, devices, or applications. These tags are embedded in network packets and travel with the traffic throughout the network infrastructure. Network devices can then apply security policies based on these tags, creating dynamic security zones that are independent of network topology or IP addressing schemes.
The implementation of SGT involves several components working together. The Identity Services Engine (ISE) performs authentication and authorization, assigning appropriate security group tags to users and devices based on their identity and posture. Network infrastructure devices that support TrustSec can then enforce policies by allowing or denying traffic between different security groups based on a Security Group Access Control List (SGACL).
VLAN Trunking Protocol (VTP) is used for managing VLAN configuration across multiple switches but does not provide identity-based segmentation. Private VLANs (PVLAN) offer layer 2 isolation within a VLAN but are still IP-based and do not consider user identity. Dynamic Trunking Protocol (DTP) automatically negotiates trunking between switches but has no role in identity-based segmentation.
The advantage of SGT over traditional access control methods is its ability to maintain consistent security policies as users move between network segments or connect from different locations. This identity-centric approach simplifies policy management and provides more granular control over network access based on who is accessing resources rather than where they are connecting from.
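On a TrustSec-capable switch, the enforcement side can be sketched as follows. The IP address and tag value are illustrative, and in production the IP-to-SGT bindings would normally come dynamically from ISE at authentication time rather than from static mappings:

```
! Turn on SGACL enforcement on this switch
cts role-based enforcement
!
! Static IP-to-SGT binding for a server (illustrative values;
! ISE typically assigns SGTs dynamically based on user/device identity)
cts role-based sgt-map 10.1.1.20 sgt 100
```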
Question 16
A network engineer is configuring OSPF and needs to prevent routing updates from being sent out of specific interfaces while still allowing those interfaces to be advertised in OSPF. Which command should be used?
A) passive-interface
B) network area
C) ip ospf priority 0
D) distribute-list out
Answer: A
Explanation:
The passive-interface command is the correct solution for preventing OSPF routing updates from being sent out of specific interfaces while still allowing those interfaces and their connected networks to be advertised within the OSPF domain. This command is commonly used on interfaces connected to end-user networks or stub networks where no OSPF neighbors exist.
When an interface is configured as passive in OSPF, the router stops sending OSPF hello packets out of that interface, preventing the formation of OSPF neighbor relationships. However, the network associated with that interface remains in the OSPF routing table and continues to be advertised to other OSPF routers in the area. This behavior is ideal for situations where you want to include a network in OSPF routing but don’t need or want OSPF protocol traffic on that segment.
The passive-interface command provides several benefits including reduced overhead on interfaces that don’t need OSPF adjacencies, improved security by preventing unauthorized routers from forming OSPF relationships, and conservation of router resources by eliminating unnecessary protocol processing on stub interfaces. It’s particularly useful for interfaces connecting to user VLANs, guest networks, or any segment where only hosts reside.
The network area command is used to enable OSPF on interfaces but doesn’t control whether updates are sent. Setting ip ospf priority 0 prevents a router from becoming the designated router but doesn’t stop OSPF updates from being sent or received. A distribute-list out controls which routes are advertised but doesn’t prevent OSPF protocol packets from being transmitted.
The typical configuration involves using the passive-interface command under the OSPF routing process for each interface that should not send OSPF updates. Many administrators also use the passive-interface default command to make all interfaces passive by default, then selectively enable OSPF on required interfaces using the no passive-interface command.
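The typical configuration described above can be sketched as follows; the process ID, interface names, and network statement are illustrative:

```
router ospf 1
 ! Suppress hellos on every interface by default ...
 passive-interface default
 ! ... then re-enable adjacency formation only where neighbors exist
 no passive-interface GigabitEthernet0/0
 ! Interfaces matched here are still advertised into OSPF,
 ! including the passive ones
 network 10.0.0.0 0.255.255.255 area 0
```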
Question 17
Which QoS mechanism is used to prevent a single traffic flow from consuming excessive bandwidth?
A) Traffic Shaping
B) Policing
C) Weighted Fair Queuing (WFQ)
D) Low Latency Queuing (LLQ)
Answer: B
Explanation:
Policing is the QoS mechanism specifically designed to prevent any single traffic flow from consuming excessive bandwidth by enforcing a maximum rate limit on traffic. When traffic exceeds the configured rate, policing takes immediate action by dropping or remarking the excess packets, making it an effective tool for bandwidth control and preventing network abuse.
Policing operates by monitoring the rate of traffic passing through an interface and comparing it against a configured committed information rate (CIR). Traffic within the rate limit is transmitted normally, while traffic exceeding the limit can be dropped immediately or marked with a lower priority value for potential dropping later in the network. This immediate enforcement makes policing suitable for controlling untrusted traffic sources or enforcing service level agreements.
The policing mechanism uses token bucket algorithms to measure traffic rates over time. The single-rate policer uses one token bucket to enforce a maximum rate, while dual-rate policers use two buckets to define both committed and peak information rates. This flexibility allows administrators to configure how strictly bandwidth limits are enforced and how burst traffic is handled.
Traffic shaping delays excess traffic rather than dropping it, buffering packets to smooth traffic flows over time. Weighted Fair Queuing (WFQ) is a scheduling mechanism that allocates bandwidth fairly among flows based on IP precedence or weight values. Low Latency Queuing (LLQ) provides priority queuing for delay-sensitive traffic like voice and video but doesn’t prevent individual flows from consuming excessive bandwidth.
Policing is commonly implemented at network edges where traffic enters the network, allowing providers to enforce customer bandwidth subscriptions or prevent denial-of-service attacks. The immediate drop action of policing means it can potentially cause TCP retransmissions and throughput reduction, which is why it’s typically used where strict enforcement is more important than maintaining smooth traffic flow.
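A minimal MQC policing sketch, assuming illustrative class criteria, rate, and interface: traffic matching the class is limited to 2 Mbps, and packets exceeding the rate are dropped immediately rather than buffered.

```
! Classify the traffic to be rate-limited (match criterion is illustrative)
class-map match-all BULK
 match dscp af11
!
! Police the class to 2 Mbps; excess is dropped, not delayed
policy-map EDGE-POLICER
 class BULK
  police 2000000 conform-action transmit exceed-action drop
!
! Enforce at the network edge where the traffic enters
interface GigabitEthernet0/1
 service-policy input EDGE-POLICER
```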
Question 18
An administrator needs to configure a router to translate multiple internal IP addresses to a single public IP address. Which NAT configuration type should be used?
A) Static NAT
B) Dynamic NAT
C) Port Address Translation (PAT)
D) NAT Virtual Interface (NVI)
Answer: C
Explanation:
Port Address Translation (PAT), also known as NAT overload, is the NAT configuration type that enables multiple internal IP addresses to be translated to a single public IP address. PAT accomplishes this by using unique source port numbers to distinguish between different internal hosts, making it the most efficient form of NAT for conserving public IP addresses.
PAT works by maintaining a translation table that maps the combination of internal IP address and source port number to the public IP address with a unique port number. When an internal host initiates a connection, the router translates the private IP address to the public IP address and assigns a unique port number. Return traffic is matched to the appropriate internal host based on the destination port number in the translation table.
This technology allows thousands of internal devices to share a single public IP address simultaneously, as the 16-bit port number field provides 65,535 possible unique mappings. PAT is the most common NAT implementation in small to medium-sized networks and is essential for organizations with limited public IP address allocations. The router tracks each session independently using the combination of IP address, port number, and protocol.
Static NAT creates a permanent one-to-one mapping between a private and public IP address, consuming one public address per internal host. Dynamic NAT creates temporary one-to-one mappings from a pool of public addresses but still requires multiple public IPs. NAT Virtual Interface (NVI) is a NAT configuration method that simplifies configuration by removing the distinction between inside and outside interfaces, but it’s not a type of address translation.
PAT is particularly valuable in environments where public IP addresses are scarce or expensive. The only limitation is that some applications or protocols that embed IP address information in their payload may require additional Application Layer Gateway (ALG) support to function correctly through PAT.
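A standard PAT (NAT overload) configuration can be sketched as follows; interface names and the inside address range are illustrative:

```
! Mark the inside (LAN) and outside (Internet-facing) interfaces
interface GigabitEthernet0/0
 ip nat inside
interface GigabitEthernet0/1
 ip nat outside
!
! Define which internal source addresses are eligible for translation
access-list 1 permit 192.168.1.0 0.0.0.255
!
! Overload: many inside hosts share the single outside interface address,
! distinguished by unique source port numbers
ip nat inside source list 1 interface GigabitEthernet0/1 overload
```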
Question 19
A company wants to implement a wireless solution that allows seamless roaming between access points without re-authentication. Which technology should be deployed?
A) 802.1X with EAP-TLS
B) Pre-Shared Key (PSK)
C) Fast Secure Roaming (802.11r)
D) MAC Address Filtering
Answer: C
Explanation:
Fast Secure Roaming, defined in the 802.11r standard, is the technology specifically designed to enable seamless roaming between wireless access points without requiring full re-authentication. This standard dramatically reduces roaming time from hundreds of milliseconds to under 50 milliseconds, making it essential for delay-sensitive applications like voice and video.
The 802.11r standard achieves fast roaming by using a mechanism called Fast BSS Transition (FT). Instead of performing a complete 802.1X authentication each time a client moves to a new access point, FT establishes security associations with nearby access points before the client actually roams. This pre-authentication process allows the client to quickly transition to a new AP by using cached security credentials rather than starting the authentication process from scratch.
The technology works by creating a mobility domain, which is a group of access points that share security context information. When a client first authenticates to the network using 802.1X, the wireless controller distributes pairwise master keys to all APs in the mobility domain. When the client roams to a new AP within the same mobility domain, it uses a four-way handshake to quickly establish encryption keys without contacting the authentication server again.
802.1X with EAP-TLS provides strong authentication but requires full re-authentication during roaming, causing delays. Pre-Shared Key (PSK) is a simple authentication method that doesn’t address roaming optimization. MAC Address Filtering provides basic access control but offers no roaming optimization and is easily circumvented, making it inadequate for enterprise security requirements.
For organizations supporting real-time applications like VoIP or video conferencing over wireless networks, implementing 802.11r is critical to maintaining application quality during roaming. The reduced latency prevents call drops and video freezes that would otherwise occur during access point transitions in traditional wireless networks.
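As a rough sketch on a Catalyst 9800 wireless LAN controller (the profile name, WLAN ID, and SSID are illustrative, and exact command availability varies by platform and release):

```
! Define the WLAN and enable 802.11r Fast BSS Transition on it
wlan CORP-WLAN 1 CorpSSID
 security ft
 ! Use the FT-enabled 802.1X key management suite
 security wpa akm ft dot1x
 no shutdown
```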
Question 20
Which HSRP state indicates that a router is actively forwarding traffic for the virtual IP address?
A) Initial
B) Standby
C) Active
D) Listen
Answer: C
Explanation:
The Active state in HSRP indicates that a router is currently forwarding traffic for the virtual IP address and responding to ARP requests for the virtual MAC address. Only one router in an HSRP group can be in the Active state at any given time, serving as the default gateway for hosts on the network segment.
When a router enters the Active state, it assumes full responsibility for forwarding packets destined to the virtual IP address. The Active router responds to ARP requests with the virtual MAC address, sends periodic hello messages to other HSRP group members, and maintains its role until it fails or a router with higher priority preempts it. The Active router’s primary function is to provide gateway services while the Standby router remains ready to take over immediately if needed.
The transition to Active state occurs through HSRP’s election process, which is based on priority values and IP addresses. Routers compare their configured priority values, and the router with the highest priority becomes Active. If priorities are equal, the router with the highest IP address wins the election. The Active router sends hello messages every three seconds by default to inform other group members of its operational status.
The Initial state is the beginning state when HSRP starts, before any election occurs. The Standby state indicates the backup router that will become Active if the current Active router fails. The Listen state means the router is receiving hello messages and monitoring the group but is neither Active nor Standby.
Understanding HSRP states is crucial for troubleshooting gateway redundancy issues. Network administrators can verify HSRP operation by checking the state of routers using show commands, ensuring proper failover configuration, and confirming that traffic is being forwarded through the intended Active router in normal operations.
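A basic HSRP configuration and verification sketch, using illustrative addresses and priority values:

```
! HSRP group 1 on the LAN-facing interface; a priority above the
! default of 100 plus preemption makes this router the Active gateway
interface GigabitEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 standby 1 ip 10.1.1.1
 standby 1 priority 110
 standby 1 preempt
!
! Verify the state column (Active, Standby, Listen, ...) with:
! Router# show standby brief
```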