Question 61
Which OSPF network type requires manual neighbor configuration and does not perform DR/BDR election?
A) Broadcast
B) Non-broadcast
C) Point-to-point
D) Point-to-multipoint non-broadcast
Answer: D
Explanation:
This question tests understanding of OSPF network types and their characteristics regarding neighbor discovery and designated router elections.
OSPF supports different network types that determine how routers discover neighbors and whether they elect designated routers. The network type must match the underlying physical topology and Layer 2 characteristics to function correctly.
Point-to-multipoint non-broadcast networks require manual neighbor configuration because they cannot use multicast for discovery and do not elect DR/BDR because the topology is treated as a collection of point-to-point links.
A) is incorrect because broadcast networks use multicast hello packets for automatic neighbor discovery on the 224.0.0.5 address. Broadcast networks also perform DR and BDR elections to reduce adjacencies and LSA flooding on multi-access segments. Examples include Ethernet LANs where multiple routers share the same broadcast domain. Manual neighbor configuration is not required on broadcast networks.
B) is incorrect because while non-broadcast networks do require manual neighbor configuration due to lack of multicast support, they still perform DR/BDR election. Non-broadcast networks are typically used for Frame Relay and similar NBMA technologies where the topology is multi-access but multicast is not available. The DR/BDR election helps manage adjacencies in the multi-access environment.
C) is incorrect because point-to-point networks use multicast for automatic neighbor discovery and do not perform DR/BDR election since only two routers exist on the link. Point-to-point is the default for serial interfaces and other dedicated links between two routers. No manual configuration is needed, and DR/BDR election is unnecessary with only two neighbors.
D) is correct because point-to-multipoint non-broadcast combines two characteristics: it treats the network as a collection of point-to-point links avoiding DR/BDR election, and it requires manual neighbor configuration because multicast is not available. This network type is useful for NBMA topologies where you want to avoid DR/BDR election complexity. Each router forms direct adjacencies with configured neighbors without electing designated routers.
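For illustration, here is a minimal configuration sketch of this network type on a hub router; the interface, process ID, and neighbor addresses are hypothetical. The network type is set per interface, and each neighbor must be listed manually under the OSPF process because hellos cannot be sent via multicast.
interface Serial0/0
 ip address 10.1.1.1 255.255.255.0
 ip ospf network point-to-multipoint non-broadcast
!
router ospf 1
 network 10.1.1.0 0.0.0.255 area 0
 neighbor 10.1.1.2
 neighbor 10.1.1.3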
Question 62:
What is the administrative distance of EIGRP summary routes compared to standard EIGRP internal routes?
A) 5 and 90
B) 90 and 90
C) 5 and 170
D) 170 and 90
Answer: A
Explanation:
This question evaluates knowledge of EIGRP administrative distances and how summary routes are treated differently from standard routes.
Administrative distance determines route preference when multiple routing protocols advertise the same destination. EIGRP uses different administrative distance values for internal routes, external routes, and summary routes to influence routing decisions appropriately.
EIGRP summary routes have an administrative distance of 5, making them highly preferred, while internal routes have an administrative distance of 90.
A) is correct because EIGRP assigns administrative distance 5 to summary routes and 90 to internal routes. When manual summarization is configured, the summarizing router installs a local route for the summary prefix pointing to Null0 with AD 5. Because administrative distance only arbitrates between routes to the same prefix, this low AD ensures the locally generated summary is preferred over the same summary prefix learned from any other source, while the Null0 next hop discards traffic that matches the summary but no specific component route, preventing routing loops. An AD of 5 is lower than that of any dynamic routing protocol; only connected and static routes are lower.
B) is incorrect because summary routes do not use the same administrative distance as internal routes. If the local Null0 summary used AD 90, the same summary prefix advertised back by a neighbor could compete with or replace the local discard route, undermining its loop-prevention role. The locally generated summary therefore uses the much lower AD of 5, while routes learned through standard EIGRP neighbor relationships retain AD 90.
C) is incorrect because while 5 is correct for summary routes, 170 is the administrative distance for EIGRP external routes, not internal routes. External routes are those redistributed into EIGRP from other routing sources. Internal routes, which are learned through standard EIGRP neighbor relationships within the autonomous system, use AD 90. The combination of 5 and 170 represents summary and external routes respectively.
D) is incorrect because it reverses the administrative distances. EIGRP external routes use 170, and internal routes use 90. This option incorrectly suggests summary routes use the external AD. Summary routes must have the lowest AD (5) to ensure they are preferred over any other route type, preventing routing loops when summarization is configured.
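As a hedged illustration (the AS number, interface, and prefix are hypothetical), a manual summary is configured on the advertising interface; the summarizing router then installs a local route to the summary via Null0 with AD 5, while routes learned from neighbors show AD 90 in show ip route.
router eigrp 100
 network 10.0.0.0
!
interface GigabitEthernet0/1
 ip summary-address eigrp 100 10.1.0.0 255.255.0.0
! The local discard route 10.1.0.0/16 via Null0 appears with AD 5; learned component routes keep AD 90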
Question 63:
Which QoS model provides per-flow resource reservation and requires signaling protocols like RSVP?
A) Best Effort
B) Integrated Services (IntServ)
C) Differentiated Services (DiffServ)
D) Class-Based Weighted Fair Queuing
Answer: B
Explanation:
This question tests understanding of QoS models and their fundamental characteristics regarding resource reservation and signaling requirements.
Quality of Service models evolved to address different network performance requirements with varying levels of complexity and scalability. Each model has distinct approaches to resource allocation and traffic management.
Integrated Services provides hard QoS guarantees through explicit per-flow resource reservation using signaling protocols like RSVP.
A) is incorrect because Best Effort provides no QoS guarantees or resource reservations whatsoever. It is the default forwarding behavior where all packets are treated equally with no prioritization or bandwidth guarantees. Best Effort does not use signaling protocols and makes no attempt to reserve resources. It represents the absence of QoS mechanisms rather than a QoS model with resource reservation.
B) is correct because Integrated Services (IntServ) is the QoS model that provides per-flow resource reservation using signaling protocols like RSVP. Applications request specific bandwidth and delay guarantees, and RSVP signals these requirements to routers along the path. Each router reserves the requested resources if available, providing hard QoS guarantees. While IntServ provides excellent per-flow control, it suffers from scalability issues in large networks because every router in the path must maintain per-flow state.
C) is incorrect because Differentiated Services (DiffServ) does not use per-flow resource reservation or signaling protocols. DiffServ provides QoS through traffic classification and marking with DSCP values, treating aggregated traffic classes differently rather than individual flows. Routers apply QoS policies based on DSCP markings without maintaining per-flow state or using signaling. DiffServ scales better than IntServ but provides softer QoS guarantees.
D) is incorrect because Class-Based Weighted Fair Queuing (CBWFQ) is a specific queuing mechanism used within DiffServ, not a QoS model itself. CBWFQ allocates bandwidth to traffic classes but does not perform per-flow resource reservation or use signaling protocols. It is a tool for implementing QoS policies rather than an overarching QoS architecture model like IntServ or DiffServ.
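As a brief, illustrative sketch of how IntServ-style admission control is enabled on Cisco IOS (the interface and bandwidth figures are examples only), RSVP is turned on per interface with a reservable bandwidth limit:
interface GigabitEthernet0/0
 ip rsvp bandwidth 7500 512
! Up to 7500 kbps may be reserved on this interface, at most 512 kbps per flow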
Question 64:
What is the default hello interval for OSPF on broadcast networks?
A) 5 seconds
B) 10 seconds
C) 30 seconds
D) 40 seconds
Answer: B
Explanation:
This question evaluates knowledge of OSPF timers and their default values on different network types.
OSPF uses hello packets to discover and maintain neighbor relationships. The hello interval determines how frequently routers send hello packets, and the dead interval determines how long to wait before declaring a neighbor down. These timers must match between neighbors for adjacency formation.
OSPF uses a 10-second hello interval on broadcast and point-to-point networks by default.
A) is incorrect because 5 seconds is not the default hello interval for OSPF on any standard network type. While administrators can manually configure 5-second intervals for faster convergence, this is a custom setting. The default is more conservative to balance neighbor detection speed with control plane overhead from hello packet processing and transmission.
B) is correct because OSPF uses a 10-second default hello interval on broadcast networks like Ethernet. This means routers send hello packets every 10 seconds to discover and maintain neighbor relationships. The corresponding dead interval is 40 seconds (4 times the hello interval), meaning a neighbor is declared down if no hello is received for 40 seconds. This 10-second interval balances timely failure detection with control plane efficiency.
C) is incorrect because 30 seconds is the default hello interval for OSPF on NBMA (non-broadcast multi-access) networks, not broadcast networks. NBMA networks like Frame Relay use longer timers due to potentially higher latency and stability characteristics. The corresponding dead interval on NBMA is 120 seconds. Different network types have different default timers optimized for their typical characteristics.
D) is incorrect because 40 seconds is the default dead interval on broadcast networks, not the hello interval. The dead interval determines how long to wait without receiving hellos before declaring a neighbor down. It is typically 4 times the hello interval, providing tolerance for multiple missed hellos before considering the neighbor failed. Confusing hello and dead intervals is a common error.
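To make the relationship concrete, the following sketch sets the timers to their broadcast-network defaults explicitly (the interface name is hypothetical); configuring them is only necessary when deviating from the defaults, and the values must match between neighbors.
interface GigabitEthernet0/0
 ip ospf hello-interval 10
 ip ospf dead-interval 40
! show ip ospf interface GigabitEthernet0/0 displays the hello and dead timers in effect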
Question 65:
Which VLAN range on Cisco switches cannot be deleted and includes VLANs 1 and 1002-1005?
A) Extended range VLANs
B) Normal range VLANs
C) Reserved VLANs
D) Private VLANs
Answer: C
Explanation:
This question tests understanding of VLAN ranges and which VLANs have special characteristics preventing deletion.
Cisco switches organize VLANs into ranges with different characteristics regarding creation, deletion, and configuration storage. Certain VLANs are reserved by the system for specific purposes and cannot be deleted to maintain system functionality.
Reserved VLANs including VLAN 1 and VLANs 1002-1005 cannot be deleted from Cisco switches.
A) is incorrect because extended range VLANs (1006-4094) can generally be created and deleted. Extended range VLANs are used when the normal range is exhausted and are stored in the running configuration rather than vlan.dat. While extended VLANs have some restrictions (like not supporting VTP in some versions), they are not the category of undeletable VLANs described in the question.
B) is incorrect because the normal VLAN range spans 1-1005, and the user-configurable VLANs 2-1001 within it can be created and deleted freely. Normal range VLANs are stored in vlan.dat and support all VLAN features. While VLAN 1 and VLANs 1002-1005 fall within the normal range, the question specifically asks about the VLANs that cannot be deleted, which is the reserved subset of the normal range rather than the range as a whole.
C) is correct because reserved VLANs are the specific VLANs that cannot be deleted: VLAN 1 (the default VLAN) and VLANs 1002-1005 (reserved for Token Ring and FDDI). VLAN 1 serves as the default VLAN for all ports and carries certain control protocols. VLANs 1002-1005 are reserved for legacy compatibility with Token Ring and FDDI networks. Cisco IOS prevents deletion of these VLANs to maintain system operation.
D) is incorrect because Private VLANs are a security feature that subdivides VLANs using primary and secondary VLAN relationships, not a range classification. Private VLANs can be deleted like regular VLANs. They are configured using normal or extended range VLAN numbers and are used for port isolation within a VLAN, not for defining undeletable system VLANs.
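As a quick illustration (the VLAN number and name are arbitrary), user VLANs in the configurable range can be created and removed, whereas IOS refuses to remove the reserved VLANs:
vlan 100
 name USERS
!
no vlan 100
! Commands such as "no vlan 1" or "no vlan 1002" are not honored; the reserved VLANs remain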
Question 66:
What Spanning Tree Protocol feature allows a port to transition immediately to forwarding state when connected to an end device?
A) PortFast
B) UplinkFast
C) BackboneFast
D) BPDU Guard
Answer: A
Explanation:
This question assesses knowledge of STP enhancements that improve convergence for specific scenarios involving end-user devices.
Spanning Tree Protocol’s default behavior includes listening and learning states before forwarding, causing 30-50 seconds of delay. This delay is unnecessary for ports connected to end devices that won’t create switching loops, so enhancements exist to expedite port activation.
PortFast allows access ports connected to end devices to bypass listening and learning states, transitioning immediately to forwarding.
A) is correct because PortFast is the STP feature that enables ports connected to end devices to transition immediately to forwarding state without waiting through listening and learning states. PortFast should only be enabled on access ports connected to end stations like computers, printers, or servers that will not generate BPDUs or create loops. When a port with PortFast enabled comes up, it immediately begins forwarding traffic, eliminating the 30-50 second STP convergence delay and improving user experience.
B) is incorrect because UplinkFast accelerates convergence for access layer switches with redundant uplinks to the distribution layer. When the primary uplink fails, UplinkFast quickly transitions the backup uplink to forwarding state. UplinkFast is designed for trunk links with redundant paths, not access ports connected to end devices. It addresses a different convergence scenario than the immediate forwarding needed for end-device connections.
C) is incorrect because BackboneFast reduces convergence time when indirect link failures occur in the network core. BackboneFast allows switches to expire max-age timers faster when detecting indirect failures through inferior BPDUs. This feature addresses convergence after topology changes in the backbone, not the initial port activation for end-device connections. BackboneFast operates on a different timescale and scenario than immediate forwarding.
D) is incorrect because BPDU Guard is a security feature that protects PortFast-enabled ports from receiving BPDUs. If a port with BPDU Guard receives a BPDU, indicating a switch was connected instead of an end device, the port is immediately error-disabled. BPDU Guard complements PortFast by preventing loops but does not itself provide immediate transition to forwarding state. It is a protective mechanism, not a convergence enhancement.
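A minimal sketch of enabling the feature on an access port (the interface and VLAN are hypothetical); PortFast can also be enabled globally for all access ports with spanning-tree portfast default:
interface GigabitEthernet1/0/10
 switchport mode access
 switchport access vlan 20
 spanning-tree portfast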
Question 67:
Which First Hop Redundancy Protocol uses virtual MAC address 0000.0C07.ACXX where XX is the HSRP group number?
A) VRRP
B) HSRP version 1
C) HSRP version 2
D) GLBP
Answer: B
Explanation:
This question tests knowledge of First Hop Redundancy Protocol implementations and their specific virtual MAC address formats.
FHRPs provide gateway redundancy using virtual IP and MAC addresses. Different protocols and versions use distinct virtual MAC address ranges that help identify the protocol in use and avoid conflicts.
HSRP version 1 uses virtual MAC addresses in the format 0000.0C07.ACXX where XX represents the group number.
A) is incorrect because VRRP uses virtual MAC addresses in the format 0000.5E00.01XX where XX is the VRRP group number in hexadecimal. VRRP is an open standard protocol defined in RFC 5798, using a different MAC address range than Cisco’s proprietary HSRP. The 0000.5E prefix is assigned to IANA for protocol use, distinguishing VRRP from HSRP implementations.
B) is correct because HSRP version 1 uses the virtual MAC address format 0000.0C07.ACXX where XX represents the HSRP group number in hexadecimal (00-FF, supporting groups 0-255). The 0000.0C prefix indicates Cisco proprietary addresses. HSRP version 1 was the original implementation of Cisco’s First Hop Redundancy Protocol and uses this specific MAC address range to identify the virtual gateway.
C) is incorrect because HSRP version 2 uses a different virtual MAC address format: 0000.0C9F.FXXX where XXX represents the group number. HSRP version 2 was introduced to support more group numbers (0-4095) compared to version 1’s limit of 0-255, requiring a different MAC address format with more bits allocated to the group number. Version 2 also added IPv6 support and improved timers.
D) is incorrect because GLBP (Gateway Load Balancing Protocol) uses virtual MAC addresses in the format 0007.B400.XXYY where XX is the GLBP group number and YY is the AVF (Active Virtual Forwarder) number. GLBP differs from HSRP by providing load balancing across multiple routers rather than active/standby redundancy. Each AVF uses a unique MAC address, allowing clients to be distributed across multiple physical gateways.
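For reference, a minimal HSRP version 1 sketch (the SVI, addresses, and priority are hypothetical); with group 10 the virtual MAC becomes 0000.0c07.ac0a, since 0x0A is 10 in hexadecimal, and adding standby version 2 would move it into the 0000.0c9f.fxxx range:
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt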
Question 68:
What is the maximum number of VLANs supported by 802.1Q standard trunking?
A) 1024
B) 2048
C) 4094
D) 4096
Answer: C
Explanation:
This question evaluates understanding of VLAN tagging standards and the limitations imposed by the 802.1Q frame format.
The 802.1Q standard inserts a 4-byte tag into Ethernet frames to identify VLAN membership. The VLAN ID field within this tag has a specific bit length that determines the maximum number of VLANs supported.
The 802.1Q standard supports 4094 usable VLANs using a 12-bit VLAN ID field.
A) is incorrect because 1024 is not related to the 802.1Q VLAN limitation. This might be confused with other network limits like the maximum number of normal range VLANs in older Cisco implementations (1-1005), though even that doesn’t equal exactly 1024. The 802.1Q standard’s 12-bit VLAN ID field allows for many more VLANs than 1024.
B) is incorrect because 2048 would be the maximum number if only 11 bits were used for the VLAN ID. The 802.1Q standard actually uses a 12-bit VLAN ID field, which allows 2^12 = 4096 possible values. However, not all these values are usable for regular VLAN assignment due to reserved values.
C) is correct because the 802.1Q standard uses a 12-bit VLAN ID field allowing 4096 (2^12) possible values from 0 to 4095. However, VLAN ID 0 is reserved for priority tagging (frames with priority but no VLAN assignment), and VLAN ID 4095 is reserved. This leaves VLANs 1-4094 as usable VLAN IDs, totaling 4094 VLANs. This is the maximum supported by the standard across all vendor implementations.
D) is incorrect because while 4096 represents the total number of possible values in a 12-bit VLAN ID field (2^12 = 4096), not all values are usable. VLAN 0 is reserved for priority frames without VLAN identification, and VLAN 4095 is reserved for implementation use. The usable range is 1-4094, providing 4094 actual VLANs that can be configured and used for network segmentation.
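As an illustrative sketch (the interface and VLAN list are hypothetical), an 802.1Q trunk is configured as follows; the encapsulation command is needed only on platforms that also support ISL, and the allowed-VLAN list may reference any IDs from 1 to 4094:
interface GigabitEthernet1/0/24
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30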
Question 69:
Which EIGRP packet type is used to acknowledge receipt of EIGRP updates?
A) Hello
B) Update
C) Query
D) ACK
Answer: D
Explanation:
This question tests knowledge of EIGRP packet types and their purposes in the routing protocol operation.
EIGRP uses five packet types for different functions including neighbor discovery, route updates, route queries during topology changes, and acknowledging reliable packet delivery. Understanding packet types is essential for troubleshooting EIGRP operations.
ACK packets acknowledge receipt of reliable EIGRP packets including updates, queries, and replies.
A) is incorrect because Hello packets are used for neighbor discovery and keepalive functions. Hello packets are sent periodically (every 5 seconds on most interfaces) to discover neighbors and maintain relationships. Hello packets are unreliable, meaning they are not acknowledged. They establish and maintain adjacencies but do not acknowledge update packets.
B) is incorrect because Update packets carry routing information, advertising routes to neighbors. While Update packets can be sent in response to changes and may contain acknowledgment information in their headers, they are not the primary packet type used solely for acknowledging receipt. Update packets contain actual routing data, whereas acknowledgments are purely confirmation messages.
C) is incorrect because Query packets are used when a route goes down and EIGRP needs to ask neighbors for alternative paths. Queries are sent during the active state of DUAL algorithm operation when a successor route is lost and no feasible successor exists. Query packets are part of the route computation process, not acknowledgment mechanisms for reliable delivery.
D) is correct because ACK (Acknowledgment) packets are specifically used to acknowledge receipt of reliable EIGRP packets. EIGRP uses reliable transport for Updates, Queries, and Replies, meaning these packets must be acknowledged. An ACK is essentially a Hello packet with no data, sent unicast to the neighbor whose packet is being acknowledged. The acknowledgment mechanism ensures reliable delivery of critical routing information in EIGRP.
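There is no configuration specific to ACKs, but their operation can be observed on any EIGRP router; in a sketch like the following (the AS number and network are hypothetical), show commands expose the reliable-transport behavior:
router eigrp 100
 network 192.168.1.0
! show ip eigrp traffic lists counters for Hellos, Updates, Queries, Replies, and Acks sent and received
! show ip eigrp neighbors shows SRTT, RTO, and the queue count used for reliable retransmission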
Question 70:
What is the default VLAN for all switch ports on Cisco switches?
A) VLAN 0
B) VLAN 1
C) VLAN 10
D) VLAN 1002
Answer: B
Explanation:
This question evaluates basic switching knowledge regarding default VLAN assignments on Cisco switch ports.
When Cisco switches boot with factory default configuration, all ports are assigned to a specific VLAN. Understanding default VLAN behavior is fundamental for switch configuration and VLAN management.
VLAN 1 is the default VLAN for all switch ports on Cisco switches with factory configuration.
A) is incorrect because VLAN 0 is not a valid VLAN for port assignment. VLAN 0 is reserved in the 802.1Q standard for priority tagging without VLAN identification. Frames tagged with VLAN 0 carry priority information but no VLAN assignment, used for quality of service purposes. VLAN 0 cannot be configured as the access VLAN for switch ports.
B) is correct because VLAN 1 is the factory default VLAN on all Cisco switch ports. When a switch boots with no configuration, all interfaces are assigned to VLAN 1 as access ports. VLAN 1 also carries control traffic by default including CDP, VTP, PAgP, and DTP. VLAN 1 is a reserved VLAN that cannot be deleted, though best practices recommend moving user traffic to other VLANs for security reasons.
C) is incorrect because VLAN 10 has no special significance in Cisco switching defaults. VLAN 10 is a normal range VLAN that can be created and used like any other VLAN from 2-1001, but it is not the default VLAN assignment. Administrators might choose VLAN 10 for specific purposes, but switches do not default to this VLAN.
D) is incorrect because VLAN 1002 is a reserved VLAN for FDDI (Fiber Distributed Data Interface) default, not the default for switch ports. VLANs 1002-1005 are reserved for Token Ring and FDDI compatibility and cannot be deleted, but they are not used as the default access VLAN. These VLANs exist for legacy technology support, not for standard Ethernet port assignments.
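As a short illustration (the interface range and VLAN are hypothetical), moving user ports off the default VLAN is a single access-VLAN assignment; ports left untouched remain access ports in VLAN 1:
interface range GigabitEthernet1/0/1 - 24
 switchport mode access
 switchport access vlan 20
! show vlan brief and show interfaces switchport confirm the resulting VLAN membership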
Question 71:
Which routing protocol characteristic describes the use of triggered updates to minimize unnecessary routing traffic?
A) Distance vector with periodic updates
B) Link state with LSA flooding
C) Distance vector with triggered updates
D) Path vector with incremental updates
Answer: C
Explanation:
This question tests understanding of routing protocol update mechanisms and how different protocols minimize routing traffic.
Routing protocols use various update mechanisms to share routing information. The efficiency and convergence speed depend on whether updates are periodic, triggered by changes, or use other mechanisms.
Distance vector protocols with triggered updates send routing information only when topology changes occur, reducing unnecessary traffic.
A) is incorrect because distance vector protocols with periodic updates send complete routing tables at regular intervals regardless of whether changes occurred. RIP is an example, sending updates every 30 seconds. Periodic updates consume bandwidth continuously and delay convergence until the next scheduled update. This is inefficient compared to triggered updates that respond immediately to changes.
B) is incorrect because link state protocols like OSPF and IS-IS use LSA flooding, not simple triggered updates. Link state protocols flood link state advertisements when changes occur, but they maintain a complete network topology database rather than just distance and direction to destinations. Link state operates differently from distance vector with triggered updates, though both respond to topology changes.
C) is correct because distance vector protocols with triggered updates, such as EIGRP and RIP version 2 (when configured), send routing updates only when network topology changes occur. When a route changes or fails, routers immediately send update packets to neighbors rather than waiting for periodic intervals. This approach reduces bandwidth consumption during stable periods and accelerates convergence during changes, combining efficiency with responsiveness.
D) is incorrect because path vector protocols like BGP operate differently from distance vector protocols. BGP maintains path attributes and AS path information, operating at a different scale for inter-domain routing. While BGP uses incremental updates sending only changes rather than complete tables, it is categorized as path vector, not distance vector. The question specifically addresses triggered updates in distance vector protocols.
Question 72:
What is the purpose of the DSCP field in the IPv4 header?
A) Fragmentation control
B) Quality of Service marking
C) Time to Live management
D) Error detection
Answer: B
Explanation:
This question evaluates knowledge of IPv4 header fields and their purposes, specifically the field used for Quality of Service.
The IPv4 header contains multiple fields serving different purposes from addressing to fragmentation to QoS. Understanding header field purposes is essential for network operation and troubleshooting.
The DSCP (Differentiated Services Code Point) field in the IPv4 header is used for Quality of Service packet classification and marking.
A) is incorrect because fragmentation control uses different IPv4 header fields: the Flags field (with Don’t Fragment and More Fragments bits) and Fragment Offset field. These fields control whether packets can be fragmented and help reassemble fragmented packets at the destination. DSCP has no relationship to fragmentation functions.
B) is correct because DSCP (Differentiated Services Code Point) is a 6-bit field in the IPv4 header Type of Service (ToS) byte used for Quality of Service marking. DSCP values classify packets into traffic classes allowing routers to apply appropriate QoS policies including prioritization, bandwidth allocation, and congestion management. DSCP replaced the older IP Precedence field, providing more granular QoS classification with 64 possible values (0-63) compared to IP Precedence’s 8 values.
C) is incorrect because Time to Live (TTL) is a separate 8-bit field in the IPv4 header that prevents packets from looping indefinitely. TTL is decremented by each router; when it reaches zero, the packet is discarded. TTL management prevents routing loops from consuming network resources infinitely but is unrelated to QoS marking provided by DSCP.
D) is incorrect because error detection in IPv4 uses the Header Checksum field, which verifies header integrity. The checksum is calculated over the IPv4 header fields and verified by each router. If the checksum fails, the packet is discarded. DSCP is not involved in error detection; it serves solely for QoS classification and marking.
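As a hedged example of DSCP marking with the Modular QoS CLI (the ACL, class, and policy names plus the UDP port range are illustrative), matching traffic is classified and marked with DSCP EF at the ingress interface:
ip access-list extended VOICE-ACL
 permit udp any any range 16384 32767
!
class-map match-all VOICE
 match access-group name VOICE-ACL
!
policy-map MARK-IN
 class VOICE
  set dscp ef
!
interface GigabitEthernet0/1
 service-policy input MARK-IN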
Question 73:
Which WLAN security protocol provides the strongest encryption and is recommended for enterprise wireless networks?
A) WEP
B) WPA
C) WPA2
D) WPA3
Answer: D
Explanation:
This question tests knowledge of wireless security protocols and their relative strengths in protecting WLAN communications.
Wireless security has evolved through multiple generations as vulnerabilities were discovered and encryption strength requirements increased. Understanding protocol progression helps in selecting appropriate security for network deployments.
WPA3 is the latest and strongest wireless security protocol providing improved encryption and authentication methods.
A) is incorrect because WEP (Wired Equivalent Privacy) is the oldest and weakest wireless security protocol, completely broken and easily compromised within minutes. WEP uses RC4 encryption with static keys that can be cracked using readily available tools. WEP should never be used in any environment as it provides virtually no security against modern attacks.
B) is incorrect because WPA (Wi-Fi Protected Access) was developed as a temporary improvement over WEP but has known vulnerabilities. WPA uses TKIP (Temporal Key Integrity Protocol) which was designed to work with existing WEP hardware, limiting its security improvements. WPA is deprecated and should not be used in modern networks as more secure options are available.
C) is incorrect because while WPA2 represented a major security improvement using AES encryption with CCMP, it is no longer the strongest available option. WPA2 has been the enterprise standard for years but has known vulnerabilities including KRACK (Key Reinstallation Attack) discovered in 2017. Although still acceptable in many environments, WPA3 provides superior security.
D) is correct because WPA3 is the latest wireless security protocol offering the strongest protection for WLAN environments. WPA3 includes 192-bit encryption in enterprise mode, individualized data encryption protecting against eavesdropping, protection against brute-force attacks with SAE (Simultaneous Authentication of Equals) replacing PSK, and forward secrecy ensuring past communications remain secure even if passwords are compromised. WPA3 is recommended for all new wireless deployments.
Question 74:
What NAT type translates multiple inside local addresses to multiple inside global addresses on a one-to-one basis?
A) Static NAT
B) Dynamic NAT
C) PAT (Port Address Translation)
D) Policy NAT
Answer: B
Explanation:
This question evaluates understanding of Network Address Translation types and their translation behaviors.
NAT translates between private and public IP addresses using different methods depending on requirements for static mappings, address conservation, and transparency. Each NAT type serves different use cases.
Dynamic NAT translates multiple inside addresses to multiple outside addresses using a pool, providing one-to-one mappings that change over time.
A) is incorrect because Static NAT creates permanent one-to-one mappings between specific inside local addresses and inside global addresses. While Static NAT does provide one-to-one translation, the mappings are manually configured and fixed, not dynamically assigned from a pool. Static NAT is typically used for servers requiring consistent public addresses, not for general user populations.
B) is correct because Dynamic NAT translates multiple inside local addresses to multiple inside global addresses on a one-to-one basis from a pool of global addresses. When an inside host initiates communication, it is dynamically assigned an available address from the global pool. The mapping is maintained for the duration of the session, then released back to the pool. Dynamic NAT provides address translation without port translation, maintaining one-to-one relationships but allowing pool reuse.
C) is incorrect because PAT (Port Address Translation), also called NAT Overload, translates multiple inside local addresses to a single or few inside global addresses using port numbers to distinguish connections. PAT is many-to-one translation, not one-to-one. PAT conserves global addresses by multiplexing many internal hosts through few external addresses, using source port numbers to maintain session uniqueness.
D) is incorrect because Policy NAT selects NAT translation based on criteria beyond source address, such as destination address or traffic type. Policy NAT defines when NAT is applied rather than the one-to-one or many-to-one relationship of the translation. Policy NAT can use static, dynamic, or PAT translation methods depending on configuration, making it a classification of NAT rules rather than a translation type.
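For illustration, a minimal dynamic NAT sketch (the addresses, pool name, and interfaces are hypothetical); because the overload keyword is omitted, each inside host receives its own address from the pool on a one-to-one basis:
ip nat pool PUBLIC-POOL 203.0.113.10 203.0.113.20 netmask 255.255.255.0
access-list 1 permit 192.168.10.0 0.0.0.255
ip nat inside source list 1 pool PUBLIC-POOL
!
interface GigabitEthernet0/0
 ip nat inside
!
interface GigabitEthernet0/1
 ip nat outside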
Question 75:
Which IPv6 address type is used for one-to-nearest communication, delivering packets to the nearest interface of multiple possible destinations?
A) Unicast
B) Multicast
C) Anycast
D) Broadcast
Answer: C
Explanation:
This question tests knowledge of IPv6 address types and their communication models.
IPv6 uses different address types for various communication patterns. Understanding these types is essential for IPv6 network design and troubleshooting.
Anycast addresses deliver packets to the nearest of multiple interfaces sharing the same address, used for load distribution and redundancy.
A) is incorrect because Unicast addresses identify a single interface for one-to-one communication. When a packet is sent to a unicast address, it is delivered to exactly one destination. Unicast is the most common address type for standard host-to-host communication but does not provide the nearest-node delivery described in the question.
B) is incorrect because Multicast addresses identify groups of interfaces for one-to-many communication. Packets sent to multicast addresses are delivered to all interfaces in the multicast group, not just the nearest one. Multicast is used for efficient distribution to multiple recipients simultaneously, like routing protocol updates or media streaming.
C) is correct because Anycast addresses are assigned to multiple interfaces, typically on different nodes, with packets delivered to the nearest interface based on routing protocol metrics. Anycast enables redundancy and load distribution as clients automatically connect to the nearest service instance. Common uses include DNS root servers, CDN nodes, and gateway services. IPv6 formalizes anycast as a standard address type, unlike IPv4 where it was less formally defined.
D) is incorrect because IPv6 does not have broadcast addresses. IPv4 used broadcast for one-to-all communication within a subnet, but IPv6 eliminated broadcast in favor of multicast for efficiency. Link-local all-nodes multicast (ff02::1) provides similar functionality to IPv4 broadcast but with better scalability. Broadcast is not an IPv6 address type.
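As a brief sketch (the prefix and interface are hypothetical), the same anycast address is configured on several routers; IOS supports an explicit anycast keyword, and routing delivers traffic to whichever instance is closest:
ipv6 unicast-routing
!
interface Loopback0
 ipv6 address 2001:db8:100::53/128 anycast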
Question 76
Which routing protocol uses the Diffusing Update Algorithm (DUAL) to calculate the best path to a destination?
A) OSPF
B) RIP
C) EIGRP
D) BGP
Answer: C
Explanation:
EIGRP (Enhanced Interior Gateway Routing Protocol) is the routing protocol that uses the Diffusing Update Algorithm (DUAL) to calculate and maintain the best path to network destinations. DUAL is a sophisticated algorithm that guarantees loop-free operation at every instant throughout a route computation, making EIGRP unique among routing protocols.
DUAL works by maintaining a topology table that contains all routes learned from neighbors, including both the best path (successor) and backup paths (feasible successors). When a route fails, DUAL can immediately switch to a pre-calculated backup route if a feasible successor exists, providing extremely fast convergence without requiring a full route recalculation. If no feasible successor is available, DUAL initiates a diffusing computation to query neighbors for alternative paths.
The algorithm uses several key concepts to ensure loop-free operation. The feasibility condition compares the advertised distance from a neighbor to the current feasible distance, ensuring that accepting a route won’t create a routing loop. DUAL maintains finite state machines for each route, tracking whether the route is passive (stable) or active (being recomputed). This sophisticated approach allows EIGRP to converge faster than traditional distance vector protocols while maintaining the simplicity and low overhead compared to link-state protocols.
OSPF uses the Shortest Path First (SPF) algorithm based on Dijkstra’s algorithm to calculate routes. RIP uses the Bellman-Ford algorithm for route computation. BGP uses the Best Path Selection algorithm that evaluates multiple attributes including AS path length, origin type, and local preference.
Understanding DUAL is essential for troubleshooting EIGRP networks and optimizing routing performance. The algorithm’s ability to maintain backup routes and perform local computations without involving all routers in the network makes EIGRP particularly efficient in large enterprise environments where fast convergence and minimal bandwidth usage are critical requirements.
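For reference, once EIGRP is enabled (the AS number and network statement below are hypothetical), the DUAL data structures can be inspected directly:
router eigrp 100
 network 10.0.0.0
! show ip eigrp topology lists each prefix with its feasible distance, successor, and any feasible successors
! show ip eigrp topology all-links also displays paths that do not meet the feasibility condition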
Question 77
An administrator wants to configure a switch to automatically disable a port if it receives BPDUs, preventing potential switching loops from unauthorized switches. Which feature should be enabled?
A) PortFast
B) BPDU Guard
C) Root Guard
D) Loop Guard
Answer: B
Explanation:
BPDU Guard is the Spanning Tree Protocol security feature that automatically disables a switch port if it receives Bridge Protocol Data Units (BPDUs), protecting the network from potential switching loops caused by unauthorized or misconfigured switches. This feature is typically enabled on access ports where end devices are connected and no BPDUs should ever be received.
When BPDU Guard is enabled on a port, the switch continuously monitors for incoming BPDUs. If a BPDU is detected, the feature immediately places the port into an error-disabled state, effectively shutting it down. This aggressive action prevents a rogue switch from participating in spanning tree calculations and potentially disrupting the network topology. The administrator must manually re-enable the port or configure automatic recovery after a timeout period.
BPDU Guard is most commonly used in conjunction with PortFast on access layer ports. PortFast allows a port to immediately transition to the forwarding state without going through the normal spanning tree listening and learning stages, improving connectivity time for end devices. Since PortFast bypasses spanning tree convergence, it should only be used where switches will never be connected. BPDU Guard provides the safety mechanism that disables the port if this assumption is violated.
PortFast itself speeds up port initialization but doesn't provide protection against unauthorized switches. Root Guard prevents a designated port from becoming a root port: when a superior BPDU is received, the port is placed in a root-inconsistent (blocked) state rather than error-disabled, protecting the existing root bridge placement. Loop Guard prevents alternate or root ports from becoming designated ports when BPDUs stop arriving, as can happen with unidirectional link failures, but it does not react to received BPDUs.
The combination of PortFast and BPDU Guard is considered a best practice for all access layer ports connecting to end devices. This configuration optimizes user connectivity experience while maintaining network stability and security by ensuring that only authorized network infrastructure devices participate in spanning tree operations.
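A minimal sketch of this recommended combination (the interface is hypothetical); BPDU Guard can be applied per port or globally to all PortFast ports, and error-disabled recovery is optional:
interface GigabitEthernet1/0/5
 switchport mode access
 spanning-tree portfast
 spanning-tree bpduguard enable
!
spanning-tree portfast bpduguard default
!
errdisable recovery cause bpduguard
errdisable recovery interval 300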
Question 78
Which First Hop Redundancy Protocol feature allows multiple routers to share the load of forwarding traffic for a subnet?
A) HSRP with Priority
B) VRRP with Preemption
C) GLBP
D) HSRP with Tracking
Answer: C
Explanation:
Gateway Load Balancing Protocol (GLBP) is the First Hop Redundancy Protocol that enables multiple routers to simultaneously share the traffic load for a subnet, providing both redundancy and load balancing. Unlike HSRP and VRRP where only one router actively forwards traffic while others remain in standby, GLBP allows all routers in the group to forward traffic concurrently.
GLBP achieves load balancing by using a single virtual IP address but multiple virtual MAC addresses. One router in the GLBP group is elected as the Active Virtual Gateway (AVG), which is responsible for assigning virtual MAC addresses to other group members and responding to ARP requests from hosts. When a host sends an ARP request for the default gateway, the AVG responds with one of the virtual MAC addresses, distributing different MAC addresses to different hosts to balance traffic across all routers.
The protocol supports multiple load-balancing algorithms including round-robin, weighted load balancing based on configured weights, and host-dependent where the same MAC address is always returned to a specific host. Each router in the GLBP group acts as an Active Virtual Forwarder (AVF) for its assigned virtual MAC address, actively forwarding traffic from hosts that have learned its MAC address. If an AVF fails, another router automatically assumes responsibility for forwarding traffic for that MAC address.
HSRP with Priority and VRRP with Preemption provide failover capabilities but don’t offer load balancing, as only one router forwards traffic at a time. HSRP with Tracking adjusts router priority based on interface or object availability but still operates in an active-standby model without load distribution.
GLBP is particularly valuable in networks where gateway bandwidth utilization is a concern and where the investment in multiple routers should provide performance benefits beyond just redundancy. The protocol’s ability to automatically redistribute traffic when a forwarder fails ensures continuous load balancing even during partial group failures.
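As an illustrative sketch (the SVI, addresses, group number, and priority are hypothetical), GLBP is configured much like HSRP, with an added load-balancing method:
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 glbp 10 ip 10.10.10.1
 glbp 10 priority 110
 glbp 10 preempt
 glbp 10 load-balancing round-robin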
Question 79
A network administrator needs to configure inter-VLAN routing on a Layer 3 switch. Which command is used to enable routing functionality?
A) ip routing
B) router eigrp
C) switchport mode trunk
D) vlan database
Answer: A
Explanation:
The ip routing command is the global configuration command that enables routing functionality on a Layer 3 switch, allowing it to forward traffic between different VLANs and subnets. Without this command, even a switch with Layer 3 capabilities operates only as a Layer 2 device and cannot perform inter-VLAN routing.
When ip routing is enabled, the switch begins building a routing table and can make forwarding decisions based on IP addresses rather than just MAC addresses. The switch can then be configured with Switch Virtual Interfaces (SVIs) for each VLAN that requires routing, and these SVIs function as the default gateways for hosts in their respective VLANs. The switch performs routing lookups in its routing table to determine the egress interface for packets destined to different subnets.
After enabling ip routing, administrators typically create SVIs using the interface vlan command and assign IP addresses to these interfaces. The switch treats these SVIs as routed interfaces and can exchange routing information with other Layer 3 devices using routing protocols like OSPF or EIGRP. The Layer 3 switch then performs hardware-based routing using application-specific integrated circuits (ASICs), providing wire-speed routing performance between VLANs.
The router eigrp command configures the EIGRP routing protocol but doesn’t enable basic routing functionality. The switchport mode trunk command configures a port for VLAN trunking at Layer 2 but has no relation to Layer 3 routing. The vlan database command was used in older switch configurations to manage VLANs but doesn’t enable routing capabilities.
Understanding the ip routing command is fundamental for implementing inter-VLAN routing on Layer 3 switches. This approach is preferred over router-on-a-stick configurations because it eliminates the bottleneck of routing all inter-VLAN traffic through a single router interface, instead distributing the routing function across the switch fabric for improved performance.
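A minimal inter-VLAN routing sketch on a Layer 3 switch (the VLANs and addresses are hypothetical): routing is enabled globally and an SVI serves as the gateway for each VLAN:
ip routing
!
interface Vlan10
 ip address 10.10.10.1 255.255.255.0
 no shutdown
!
interface Vlan20
 ip address 10.10.20.1 255.255.255.0
 no shutdown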
Question 80
Which protocol is used by Cisco DNA Center to communicate with network devices for configuration and management?
A) SNMP
B) NETCONF/RESTCONF
C) Telnet
D) TFTP
Answer: B
Explanation:
NETCONF and RESTCONF are the modern network management protocols used by Cisco DNA Center to communicate with network devices for configuration and management tasks. These protocols provide programmatic interfaces for network automation and are fundamental to software-defined networking and intent-based networking architectures.
NETCONF (Network Configuration Protocol) is an IETF standard protocol that uses XML encoding and operates over secure transport protocols like SSH. It provides mechanisms to install, manipulate, and delete device configurations through structured data models defined in YANG (Yet Another Next Generation). NETCONF separates configuration data from operational data and supports transactional operations with rollback capabilities, making it more robust than traditional CLI-based management.
RESTCONF provides similar functionality to NETCONF but uses RESTful APIs with HTTP/HTTPS as the transport protocol and supports both XML and JSON data encoding. This makes RESTCONF more accessible to web developers and easier to integrate with modern application development frameworks. DNA Center uses both protocols depending on device capabilities and specific use cases, with RESTCONF often preferred for its simplicity and alignment with contemporary API design practices.
SNMP is a traditional network monitoring protocol that can retrieve device information and perform limited configuration changes, but it lacks the transactional capabilities and structured data models of NETCONF/RESTCONF. Telnet provides command-line access to devices but is insecure and not suitable for programmatic automation. TFTP is a simple file transfer protocol used for backing up and restoring configurations but doesn’t provide interactive configuration management.
The adoption of NETCONF and RESTCONF represents a fundamental shift in network management toward automation and programmability. These protocols enable DNA Center to implement intent-based networking by translating high-level business intent into specific device configurations, automatically deploying those configurations across the network infrastructure, and continuously verifying that the network is operating according to intent.
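As a hedged sketch of enabling these interfaces on an IOS XE device so a controller or script can reach them (the username and password are placeholders), NETCONF listens over SSH on TCP 830 and RESTCONF runs over the HTTPS server:
username admin privilege 15 secret StrongPassword123
!
netconf-yang
!
ip http secure-server
restconf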