Palo Alto Networks NGFW-Engineer Next-Generation Firewall Engineer Exam Dumps and Practice Test Questions Set9 Q161-180

Question 161 

A network security engineer is designing an Active/Active High Availability cluster. A primary design consideration is preventing asymmetric routing, where a session’s originating traffic egresses Firewall-A and the return traffic ingresses Firewall-B. The engineer wants to ensure that the firewall that performs the initial slowpath session setup is deterministically the same firewall for both the client-to-server and server-to-client flows. Which Active/Active HA configuration setting provides this deterministic session setup behavior?

A) Session Setup Rule: First Packet
B) Session Setup Rule: IP Hash (Source + Destination)
C) Session Owner Selection: Primary Device
D) HA Virtual Address Type: ARP Load Sharing

Correct Answer: B

Explanation:

This question targets a highly advanced and critical component of Active/Active HA design: ensuring that session setup is deterministic to prevent duplicate sessions in asymmetrically routed environments. The challenge is that the first packet of a flow can arrive at different firewalls depending on the direction of the traffic (client-to-server vs. server-to-client).

Why B) Session Setup Rule: IP Hash (Source + Destination) is Correct: This is the specific feature designed for this exact conundrum. When the Session Setup Rule is configured for IP Hash (Source + Destination), the firewall performs a specific operation on the first packet. It takes the source and destination IP addresses from the packet, sorts them (e.g., lowest to highest), and then performs a hash calculation. The result of this hash deterministically assigns the session setup task to one of the two firewalls. Because the IPs are sorted before hashing, the result is identical regardless of the packet’s direction. A client-to-server packet (Source: A, Dest: B) and a server-to-client packet (Source: B, Dest: A) will both have their IPs sorted to (A, B), yielding the same hash result and thus selecting the same firewall for session setup. This prevents a race condition where both firewalls try to set up the same session.

Why A) Session Setup Rule: First Packet is Incorrect: This is a non-deterministic method in an asymmetric environment. If the client-to-server first packet arrives at Firewall-A, Firewall-A will set up the session. If the server-to-client first packet (e.g., a reply to a packet that was routed asymmetrically) arrives at Firewall-B, Firewall-B will also try to set up the session, as it was the first packet it saw. This leads to duplicate sessions, state confusion, and dropped packets. This setting is only safe in perfectly symmetric routing environments.

Why C) Session Owner Selection: Primary Device is Incorrect: This setting confuses two different A/A concepts: Session Owner and Session Setup. The Session Owner is the firewall responsible for the fastpath and Layer 7 inspection of an already-established session. The Session Setup is the slowpath process of creating that session. While related, this setting dictates who owns the session, not who builds it. You can have the Primary Device own all sessions, but if the Session Setup rule is First Packet, you can still have a duplicate session race condition.

Why D) HA Virtual Address Type: ARP Load Sharing is Incorrect: This configuration dictates how the firewalls present themselves to the network to attract traffic. ARP Load Sharing is a method where both firewalls respond to ARP requests for a shared floating IP, using a hash to distribute the load. This is a mechanism for distributing traffic to the cluster, but it does not solve the internal session setup logic, especially if the return path is governed by a different router that does not participate in this load-sharing mechanism.
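
The direction-independent property of the sorted hash can be illustrated with a short sketch (Python; the hash function and modulo mapping here are illustrative assumptions, not the actual PAN-OS algorithm):

```python
import hashlib

def session_setup_device(src_ip: str, dst_ip: str, num_devices: int = 2) -> int:
    # Sort the address pair so (A, B) and (B, A) produce identical hash input
    pair = "|".join(sorted([src_ip, dst_ip]))
    digest = hashlib.sha256(pair.encode()).digest()
    # Map the hash onto one of the HA peers
    return int.from_bytes(digest[:4], "big") % num_devices

# Client-to-server and server-to-client first packets select the same firewall
fwd = session_setup_device("10.0.0.5", "192.0.2.9")
rev = session_setup_device("192.0.2.9", "10.0.0.5")
```

Because sorting normalizes the pair before hashing, fwd and rev are always equal, which is exactly why the IP Hash setup rule remains deterministic no matter which direction’s first packet arrives first.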

Question 162 

A network administrator is configuring an SSL Inbound Inspection policy to protect an internal web server. This requires importing the web server’s certificate and its corresponding private key onto the Palo Alto Networks firewall. The web server’s certificate is signed by an intermediate certificate authority (ICA), which in turn is signed by a trusted root CA. The web server itself is only configured with the server certificate. What must the administrator import onto the firewall to ensure external clients can successfully validate the certificate chain?

A) Only the server’s private key. The firewall will automatically fetch the public certificate from the server.
B) The server certificate, its private key, and the intermediate CA’s certificate, which must be configured as a ‘Trusted Root CA’.
C) The server certificate and its private key. The client’s browser is responsible for fetching the intermediate certificate.
D) The server certificate, its private key, and the intermediate CA’s certificate, which should be bundled in the same file or imported separately.

Correct Answer: D

Explanation:

This question tests the practical implementation of SSL Inbound Inspection, which involves the firewall impersonating the end server. This impersonation must be perfect, or it will be rejected by the client’s browser. A key part of this is presenting a complete and valid certificate chain.

Why D) The server certificate, its private key, and the intermediate CA’s certificate, which should be bundled in the same file or imported separately is Correct: For SSL Inbound Inspection, the firewall terminates the client’s SSL session. To do this, it must have the server’s end-entity (leaf) certificate and its associated private key. However, when a client browser receives this leaf certificate, it will attempt to validate it. The browser sees it is signed by an intermediate CA, which it may not trust. The server (in this case, the firewall) is responsible for also sending the intermediate certificate as part of the TLS handshake. This allows the browser to build a chain of trust from the leaf certificate, to the intermediate certificate, and finally to the root CA (which the browser already trusts). Therefore, the administrator must load both the server’s cert/key pair and the intermediate CA’s public certificate onto the firewall and associate it with the leaf certificate.

Why A) Only the server’s private key. The firewall will automatically fetch the public certificate from the server is Incorrect: This is fundamentally incorrect. The firewall requires both the private key and the public certificate to be imported. It cannot function with only the private key and has no mechanism to fetch the public certificate for this purpose.

Why B) The server certificate, its private key, and the intermediate CA’s certificate, which must be configured as a ‘Trusted Root CA’ is Incorrect: This is a subtle but critical error. The intermediate CA’s certificate must be imported, but it must not be marked as a ‘Trusted Root CA’. A ‘Trusted Root CA’ is a certificate used to validate incoming certificates (as in SSL Forward Proxy). In this case, the intermediate certificate is not being used to validate anything; it is being served to clients as part of the certificate chain. Marking it as a root CA is a common misconfiguration that will not solve the problem.

Why C) The server certificate and its private key. The client’s browser is responsible for fetching the intermediate certificate is Incorrect: While browsers can sometimes fetch missing intermediate certificates using a mechanism called AIA (Authority Information Access), this is not guaranteed, adds latency, and is not a robust design. The TLS protocol is designed for the server to present its full chain (minus the root). Relying on AIA fetching is poor practice, and the firewall is designed to serve the full chain properly when configured with all the necessary certificates.
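
The “bundled in the same file” option simply means concatenating the PEM blocks, leaf first, then the intermediate (a minimal sketch with placeholder PEM content; real files carry base64 certificate data between the markers):

```python
# Placeholder PEM blocks -- real certificates contain base64 data between the markers
leaf_pem = (
    "-----BEGIN CERTIFICATE-----\n"
    "...server (leaf) certificate...\n"
    "-----END CERTIFICATE-----\n"
)
intermediate_pem = (
    "-----BEGIN CERTIFICATE-----\n"
    "...intermediate CA certificate...\n"
    "-----END CERTIFICATE-----\n"
)

# Order matters: the end-entity (leaf) certificate must come first in the chain file
chain_bundle = leaf_pem + intermediate_pem
```

Importing this single chained file (or importing the two certificates separately) gives the firewall everything it needs to present the full chain during the TLS handshake.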

Question 163 

An administrator is configuring a firewall that is managed by Panorama. The administrator needs to configure an interface IP address and a default route. These settings are unique for every firewall in the organization. The administrator also needs to apply a set of 50 common Security policy rules to all firewalls. What is the correct Panorama construct to manage these disparate and common settings?

A) Use a Template for the unique network settings and a separate Template Stack for the common Security policies.
B) Use a Device Group for the unique network settings and a Template for the common Security policies.
C) Use a Template Stack for the unique network settings (using variables) and a Device Group for the common Security policies.
D) Use a single Device Group with Overrides for the unique network settings and a shared base for the Security policies.

Correct Answer: C

Explanation:

This question is fundamental to understanding the hierarchical management architecture of Panorama and the strict division of labor between its two primary components: Templates and Device Groups. This architecture is designed to maximize reusability while allowing for unique settings.

Why C) Use a Template Stack for the unique network settings (using variables) and a Device Group for the common Security policies is Correct: This is the textbook implementation.

Device Group: This construct is used exclusively for managing shared Policies and Objects (Address, Service, Security, NAT, Decryption, etc.). The administrator would create one common Device Group, place all firewalls into it, and create the 50 common Security policy rules there. This ensures policy consistency.

Template Stack: This construct is used exclusively for managing Network and Device settings (Interfaces, Virtual Routers, Zones, HA, etc.). Since the network settings are unique, the administrator would create a base Template with a Template Variable for the interface IP and default route (e.g., $mgmt_ip, $default_gw). This Template would be placed in a Stack, and all firewalls would be assigned to that stack. The administrator can then provide the unique IP/gateway values for each firewall in the stack, satisfying the unique requirement while still using a common configuration base.

Why A) Use a Template for the unique network settings and a separate Template Stack for the common Security policies is Incorrect: This confuses the roles. Security policies are managed by Device Groups, not Templates or Template Stacks. This configuration is impossible.

Why B) Use a Device Group for the unique network settings and a Template for the common Security policies is Incorrect: This is the most incorrect option, as it completely reverses the roles of Templates and Device Groups. Device Groups are for policies, and Templates are for network/device settings.

Why D) Use a single Device Group with Overrides for the unique network settings and a shared base for the Security policies is Incorrect: This is fundamentally incorrect because overrides do not apply to Device Groups. The concept of variables and overrides belongs exclusively to Templates. All objects and policies within a specific Device Group are identical for all members of that group; they cannot be unique per device.

Question 164 

A security engineer needs to configure a Quality of Service (QoS) policy to manage network congestion. The company has a critical, custom in-house application that runs on TCP port 55123. The goal is to guarantee a minimum amount of bandwidth for this application. App-ID identifies this traffic as ‘unknown-tcp’. The administrator has already created a custom application signature (‘custom-app’) for this traffic. Which object is used to link the ‘custom-app’ App-ID to a specific bandwidth guarantee?

A) A QoS Profile
B) A QoS Policy Rule
C) A Traffic Distribution Profile
D) An Application Override rule

Correct Answer: B

Explanation:

This question tests the components of a Quality of Service (QoS) configuration. QoS on the Palo Alto Networks firewall is a multi-step process involving defining classes, creating a profile, and then creating policy rules to classify traffic.

Why B) A QoS Policy Rule is Correct: The QoS Policy Rule is the component that performs the classification. It functions similarly to a Security policy rule. The administrator would create a QoS Policy rule that matches the specific traffic (e.g., Source Zone: Trust, Application: ‘custom-app’). The action of this rule is not allow or deny, but rather to assign the matched traffic to a specific QoS Class (e.g., Class 2). This Class 2 would have been previously defined by the administrator (in the QoS Profile) as having a guaranteed bandwidth, thus meeting the requirement.

Why A) A QoS Profile is Incorrect: The QoS Profile is where the administrator defines the bandwidth for the different classes (e.g., Class 1 has 10% guaranteed, Class 2 has 50% guaranteed, etc.). However, this profile itself does not perform the classification. It is just a template of bandwidth allocations. The QoS Policy Rule is what actually inspects the traffic and decides which class it belongs to.

Why C) A Traffic Distribution Profile is Incorrect: This object is associated with the SD-WAN feature, not the QoS feature. A Traffic Distribution Profile is used to decide which ISP link to send traffic to, based on path health or load. It does not manage bandwidth guarantees or priority on a single link, which is the function of QoS.

Why D) An Application Override rule is Incorrect: An Application Override rule is used to bypass App-ID inspection and force a specific application label based on a port. While this might be a prerequisite if the administrator didn’t have a custom signature, the scenario states they do have a custom signature. Furthermore, the Application Override rule does not, by itself, apply any QoS; it only aids in classification, which is then used by the QoS Policy Rule.
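
The division of labor between the profile (bandwidth per class) and the policy rule (traffic-to-class assignment) can be modeled in a few lines (a toy sketch; the class numbers and bandwidth figures are illustrative, not PAN-OS defaults):

```python
# QoS Profile: defines guaranteed bandwidth (Mbps) per class -- illustrative values
qos_profile = {"class2": 50, "class4": 0}  # class4 stands in for best effort here

def qos_policy_rule(app: str) -> str:
    # QoS Policy Rule: classifies traffic -- 'custom-app' is assigned to class2
    return "class2" if app == "custom-app" else "class4"

guaranteed_mbps = qos_profile[qos_policy_rule("custom-app")]
```

The rule only assigns a class; the guarantee itself lives in the profile, which is why neither object alone satisfies the requirement.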

Question 165 

An administrator is configuring a Palo Alto Networks firewall in a public cloud environment and is using an External Dynamic List (EDL) to block a list of malicious IP addresses hosted on a web server. The firewall is in a production environment and cannot be permitted to access the internet directly for management tasks. How can the firewall securely and successfully retrieve updates for this EDL?

A) The firewall will automatically use the data plane interfaces to fetch the list, as the EDL is a policy object.
B) Configure a Service Route so that traffic destined for the EDL’s web server originates from a data plane interface, not the management interface.
C) Configure the web server to push the EDL to the firewall’s management interface via an API call.
D) This is not possible; the management interface must have direct internet access for all updates, including EDLs.

Correct Answer: B

Explanation:

This question addresses a common and critical secure-deployment scenario for firewalls, especially in the cloud. The management interface should be isolated, but core services still need to get updates. The solution is to force specific management-plane traffic to use the more secure, policy-controlled data-plane interfaces.

Why B) Configure a Service Route so that traffic destined for the EDL’s web server originates from a data plane interface, not the management interface is Correct: A Service Route is a specific feature that overrides the default behavior of the firewall’s management plane. By default, the management plane (the mgmt interface) tries to source all of its own traffic (e.g., Panorama connections, DNS queries, NTP, and EDL updates) from itself. A Service Route allows the administrator to create a policy that says, “For traffic destined to this specific IP/network (the EDL web server), source the traffic from this data-plane interface instead.” This forces the EDL update traffic to egress a data-plane interface (like ‘Trust’ or ‘Untrust’), where it is subject to all Security and NAT policies, allowing for secure, controlled access.

Why A) The firewall will automatically use the data plane interfaces to fetch the list, as the EDL is a policy object is Incorrect: This is false. By default, the management plane generates this traffic, and it will use the management interface’s routing table. This is the exact problem the Service Route is designed to solve.

Why C) Configure the web server to push the EDL to the firewall’s management interface via an API call is Incorrect: The EDL mechanism is a pull model, not a push model. The firewall is responsible for initiating the connection to the web server at a configured interval to download the list. The web server has no awareness of the firewall and cannot push an update to it.

Why D) This is not possible; the management interface must have direct internet access for all updates, including EDLs is Incorrect: This is a common but incorrect assumption and represents an insecure design. Isolating the management port is a best practice, and when it is isolated, Service Routes are the supported mechanism for allowing specific, necessary services to egress via the data plane.

Question 166 

An administrator needs to deploy a VM-Series firewall in AWS. The deployment must be fully automated. The firewall must license itself, register with Panorama, and download a specific PAN-OS software version upon its initial boot. The administrator has created an init-cfg.txt file and a bootstrap.xml file. Where must these files be placed for the VM-Series to consume them during automated deployment in AWS?

A) In the AWS EC2 instance metadata (User Data).
B) On the Panorama server, which will push them to the VM-Series upon registration.
C) In an AWS S3 bucket that is made accessible to the VM-Series instance via an Instance Profile.
D) On an internal web server, with the IP address specified in the EC2 instance’s ‘Hostname’ tag.

Correct Answer: C

Explanation:

This question focuses on the bootstrapping process for VM-Series firewalls in a public cloud, a critical component of automation and infrastructure-as-code. The firewall needs a way to get its initial day zero configuration before it even knows how to talk to Panorama.

Why C) In an AWS S3 bucket that is made accessible to the VM-Series instance via an Instance Profile is Correct: This is the standard, supported method for bootstrapping in AWS. The administrator stages all the necessary files (init-cfg.txt for initial settings, bootstrap.xml for the base configuration, license files, software images, etc.) into a folder within an S3 bucket. The VM-Series instance is then launched with an Instance Profile (an IAM role) that grants it read-access to this specific S3 bucket. A small script in the User Data field (metadata) tells the firewall the name of the S3 bucket to connect to. The firewall boots, reads this instruction, uses its IAM role to securely access the S3 bucket, and downloads the entire contents, beginning the automated configuration process.

Why A) In the AWS EC2 instance metadata (User Data) is Incorrect: The User Data field is used to pass the instructions to the VM, not the files themselves. For example, the User Data field is where you would put the command that tells the firewall which S3 bucket to look in. The metadata field has a small size limit (e.g., 16KB) and cannot be used to hold large files like a full PAN-OS software image or a complex bootstrap.xml file.

Why B) On the Panorama server, which will push them to the VM-Series upon registration is Incorrect: This is a chicken-and-egg problem. The firewall cannot register with Panorama until after it has been bootstrapped. The init-cfg.txt file is what contains the IP address of the Panorama server and the authentication keys needed to register. The bootstrap process must happen first.

Why D) On an internal web server, with the IP address specified in the EC2 instance’s ‘Hostname’ tag is Incorrect: While bootstrapping from an internal web server is a supported method, it is not the cloud-native method for AWS and is far more complex. It would require the web server to be built and accessible, and the IP would be passed via User Data, not a ‘Hostname’ tag. The S3 method is the standard and expected answer for an AWS environment.
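
A minimal S3 bootstrap layout and init-cfg.txt look roughly like the following (all values are placeholders; the four folder names and the init-cfg.txt keys shown are part of the documented bootstrap format, but confirm the exact key set against the documentation for your PAN-OS release):

```
s3://my-bootstrap-bucket/          (bucket name is a placeholder)
  config/
    init-cfg.txt
    bootstrap.xml
  content/     <- App-ID/Threat content packages
  license/     <- auth code files
  software/    <- PAN-OS images

# config/init-cfg.txt (placeholder values)
type=dhcp-client
hostname=vm-fw-01
panorama-server=10.0.0.10
tplname=aws-stack
dgname=aws-firewalls
vm-auth-key=<key generated on Panorama>
```

The User Data field then only needs to name the bucket; the firewall’s Instance Profile grants the read access to pull everything else.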

Question 167 

A security administrator is analyzing a complex threat and observes traffic that App-ID identifies as ‘ssl’. The administrator needs to know the specific domain the user is accessing, which is carried in the Client-Hello packet’s SNI (Server Name Indication) extension. The company has a no-decryption policy for this traffic, but it still needs to log the SNI. Which component, when configured, will provide this information in the Traffic log?

A) This is not possible; the SNI is encrypted and can only be read with full SSL Forward Proxy decryption.
B) A URL Filtering profile, which will extract and log the SNI field even without full decryption.
C) A Decryption Profile configured with the ‘no-decrypt’ action.
D) An Anti-Spyware profile with DNS Security enabled.

Correct Answer: B

Explanation:

This question explores a specific, valuable feature of the URL Filtering license that provides visibility into encrypted traffic without performing full decryption. This is a common requirement for organizations that are hesitant to decrypt sensitive traffic but still need visibility.

Why B) A URL Filtering profile, which will extract and log the SNI field even without full decryption is Correct: The SNI field in a Client-Hello message is sent in clear text as part of the initial SSL/TLS handshake. It is not encrypted. This is necessary so that the web server, which might be hosting hundreds of different domains on one IP, knows which certificate to present. The Palo Alto Networks firewall’s URL Filtering engine is designed to parse this initial Client-Hello. When a URL Filtering profile is attached to the Security policy rule that allows the ‘ssl’ traffic, the firewall will automatically peek into the handshake, extract the SNI (e.g., www.badsite.com), and log this domain in the URL/Host column of the Traffic and URL logs. This provides domain-level visibility without decrypting the payload.

Why A) This is not possible; the SNI is encrypted and can only be read with full SSL Forward Proxy decryption is Incorrect: This is a common misconception. The SNI itself is not encrypted. The rest of the session, including the HTTP request and the server’s certificate (in modern TLS), becomes encrypted after the Client-Hello.

Why C) A Decryption Profile configured with the ‘no-decrypt’ action is Incorrect: A Decryption Profile is part of the Decryption policy. While a ‘no-decrypt’ rule would certainly not decrypt the traffic, it is not the feature that enables SNI logging. The SNI logging is a function of the URL Filtering profile, which is attached to the Security policy.

Why D) An Anti-Spyware profile with DNS Security enabled is Incorrect: An Anti-Spyware profile with DNS Security is used to inspect the DNS protocol (UDP/53 or DNS-over-HTTPS). It has no role in inspecting an ‘ssl’ (TCP/443) session or parsing the Client-Hello packet.
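
That the SNI travels in clear text is easy to verify without any firewall: generate a Client-Hello in memory and look for the hostname in the raw bytes (a standalone sketch using Python’s standard ssl module; www.example.com is an arbitrary placeholder hostname):

```python
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Memory BIOs let us run the handshake without any network connection
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="www.example.com")

try:
    tls.do_handshake()          # stalls waiting for the ServerHello
except ssl.SSLWantReadError:
    pass                        # expected: we never supply server data

client_hello = outgoing.read()  # the raw, unencrypted Client-Hello bytes
sni_visible = b"www.example.com" in client_hello
```

sni_visible is True: the hostname sits as plain ASCII in the server_name extension of the Client-Hello, which is exactly the field the URL Filtering engine parses.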

Question 168 

A firewall administrator has configured two GlobalProtect Gateways: one in North America (NA) and one in Europe (EU). The Portal is configured to provide a list of both gateways to all users. The administrator wants users in North America to connect only to the NA gateway and users in Europe to connect only to the EU gateway. What GlobalProtect Portal configuration should be used to enforce this geographic segmentation?

A) Configure the Gateway Priority setting, giving the NA gateway a higher priority.
B) Configure the Gateway Source Region setting within the agent’s external gateway configuration.
C) Configure two separate Portals, one for each region, and use DNS-based routing to direct users.
D) Configure a HIP Profile to check the client’s source IP and assign the correct gateway.

Correct Answer: B

Explanation:

This question targets the built-in mechanism that GlobalProtect uses to provide a geographically aware gateway list to connecting clients, ensuring optimal performance and regional policy enforcement.

Why B) Configure the Gateway Source Region setting within the agent’s external gateway configuration is Correct: This is the feature explicitly designed for this use case. When configuring the list of external gateways within the Portal’s agent configuration, each gateway entry has a Source Region field. The administrator can create an entry for the NA gateway and select North America from the region list. They can create a second entry for the EU gateway and select Europe. When a user connects to the Portal, the Portal looks at the user’s source IP address, determines its geographic region, and then provides the client with a filtered list of gateways that match that region. The European user will only be given the EU gateway, and the NA user will only be given the NA gateway.

Why A) Configure the Gateway Priority setting, giving the NA gateway a higher priority is Incorrect: The Priority setting is used for failover within a region. For example, if there were two NA gateways (NA-Primary and NA-Backup), the administrator would give NA-Primary a higher priority. This setting does not segment users by their source location; it would simply cause all users (both NA and EU) to try and connect to the NA gateway first.

Why C) Configure two separate Portals, one for each region, and use DNS-based routing to direct users is Incorrect: This is a vastly over-engineered, complex, and expensive solution to a problem that the Portal can solve natively. It would require managing separate portal configurations, certificates, and complex DNS policies.

Why D) Configure a HIP Profile to check the client’s source IP and assign the correct gateway is Incorrect: A Host Information Profile (HIP) is used to check the posture of the endpoint (e.g., antivirus, patch level). It is not used to check the location of the endpoint for the purpose of gateway selection. The Source Region feature is the correct mechanism.

Question 169 

An administrator is configuring a new virtual router and needs to ensure that all traffic destined for the internal subnet 10.50.0.0/16 is forwarded to an internal router at 10.1.1.254. All other traffic should be forwarded to the primary ISP gateway at 203.0.113.1. How should the virtual router’s static route table be configured to achieve this?

A) One route: 0.0.0.0/0, Next-Hop 203.0.113.1. One PBF rule: Source 10.1.0.0/16, Dest 10.50.0.0/16, Next-Hop 10.1.1.254.
B) Two routes: 1) 10.50.0.0/16, Next-Hop 10.1.1.254. 2) 0.0.0.0/0, Next-Hop 203.0.113.1.
C) Two routes: 1) 0.0.0.0/0, Next-Hop 203.0.113.1, Metric 10. 2) 0.0.0.0/0, Next-Hop 10.1.1.254, Metric 20.
D) One route: 10.50.0.0/16, Next-Hop 10.1.1.254. A dynamic routing protocol must be used for the default route.

Correct Answer: B

Explanation:

This question tests the most fundamental principle of IP routing: the most specific route wins. A routing table is a list of destinations, and a router will always choose the route with the longest prefix match (i.e., the most specific) for any given packet.

Why B) Two routes: 1) 10.50.0.0/16, Next-Hop 10.1.1.254. 2) 0.0.0.0/0, Next-Hop 203.0.113.1 is Correct: This is the standard and correct implementation. When a packet arrives destined for 10.50.1.1, the router consults this table. It matches both routes (10.50.1.1 is part of 10.50.0.0/16, and it is also part of 0.0.0.0/0, which means any). The router then selects the most specific route, which is 10.50.0.0/16. It will forward the packet to 10.1.1.254. When another packet arrives for 8.8.8.8, it only matches one route: the default route 0.0.0.0/0. The router will then forward this packet to 203.0.113.1. This configuration perfectly satisfies all requirements.

Why A) One route: 0.0.0.0/0, Next-Hop 203.0.113.1. One PBF rule: Source 10.1.0.0/16, Dest 10.50.0.0/16, Next-Hop 10.1.1.254 is Incorrect: While this would technically work, it is an improper use of features. Policy-Based Forwarding (PBF) is an override to the routing table, often based on source or application. Using it for simple destination-based routing is overly complex and adds processing overhead. The routing table itself is the correct place to handle this.

Why C) Two routes: 1) 0.0.0.0/0, Next-Hop 203.0.113.1, Metric 10. 2) 0.0.0.0/0, Next-Hop 10.1.1.254, Metric 20 is Incorrect: This configuration is for failover, not for routing to two different destinations. This would result in all traffic going to 203.0.113.1 (because it has the lower metric). The route to 10.1.1.254 would only be used if the primary default route failed (e.g., via path monitoring). It would not correctly route traffic to the 10.50.0.0/16 network.

Why D) One route: 10.50.0.0/16, Next-Hop 10.1.1.254. A dynamic routing protocol must be used for the default route is Incorrect: This is false. A default route can absolutely be a static route, and in most simple ISP-connected environments, it is. There is no requirement for a dynamic routing protocol.
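
Longest-prefix-match selection is simple to demonstrate with Python’s standard ipaddress module (a sketch of the lookup logic only, not firewall code):

```python
import ipaddress

# The two static routes from option B
routes = [
    (ipaddress.ip_network("10.50.0.0/16"), "10.1.1.254"),
    (ipaddress.ip_network("0.0.0.0/0"), "203.0.113.1"),
]

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    candidates = [(net, nh) for net, nh in routes if addr in net]
    # Most specific route (longest prefix) wins
    return max(candidates, key=lambda entry: entry[0].prefixlen)[1]

internal = next_hop("10.50.1.1")  # matches both routes; the /16 wins -> 10.1.1.254
external = next_hop("8.8.8.8")    # only the default route matches -> 203.0.113.1
```

The 10.50.1.1 lookup matches both entries, but the /16 is more specific than /0 and is selected, mirroring exactly how the virtual router resolves the two static routes.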

Question 170 

A security administrator is configuring a new Security policy rule that must allow ‘ssh’ but only if it is running on its standard, default port (TCP 22). The rule must explicitly block ‘ssh’ if it is attempted on any non-standard port (e.g., TCP 2222). What values must be configured in the Application and Service columns of this rule?

A) Application: ‘ssh’ | Service: ‘service-tcp-22’
B) Application: ‘ssh’ | Service: ‘application-default’
C) Application: ‘any’ | Service: ‘service-tcp-22’
D) Application: ‘ssh’ | Service: ‘any’

Correct Answer: B

Explanation:

This question is a quintessential test of the power of the application-default keyword. It demonstrates how to build a strict, Zero Trust policy that enforces both the application and its expected port, preventing protocol-masquerading and non-standard port usage.

Why B) Application: ‘ssh’ | Service: ‘application-default’ is Correct: The ‘application-default’ keyword is a special instruction, not a service object. When used in the Service column, it creates a strict dependency. This rule tells the firewall: Match this packet only if the App-ID is ‘ssh’ AND the destination port is the standard, defined port for ‘ssh’ (which is TCP 22). If a user tries to connect to an SSH server on port 2222, App-ID will identify the application as ‘ssh’, but the rule will fail to match because the service (TCP 2222) is not ‘application-default’. The traffic will then fall to a later deny rule, fulfilling the requirement.

Why A) Application: ‘ssh’ | Service: ‘service-tcp-22’ is Incorrect: Security policy match columns are evaluated together (logically ANDed), so this rule matches only ‘ssh’ on TCP 22 and, for this single application, behaves much like the correct answer. The problem is that it hard-codes the port in a manually maintained service object: the rule no longer tracks App-ID’s own definition of the application’s standard port, and if additional applications with different default ports are later added to the rule, the static service object either breaks them or must be widened, eroding the port restriction. The ‘application-default’ keyword is the construct Palo Alto Networks provides to tie each application to its standard port automatically, which is why it is the expected answer.

Why C) Application: ‘any’ | Service: ‘service-tcp-22’ is Incorrect: This is a legacy stateful firewall rule. It says allow any application as long as it is on port 22. This is highly insecure, as it would allow malware, BitTorrent, or any other application to tunnel over port 22.

Why D) Application: ‘ssh’ | Service: ‘any’ is Incorrect: This rule would allow the ‘ssh’ application on any port, including TCP 22, TCP 2222, or any other port. This is the exact opposite of the stated requirement, which is to block non-standard ports.
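
The match logic can be sketched as a toy model (an assumed simplification: policy columns are ANDed, and application-default resolves to the ports in the App-ID database; the port table below lists only what this example needs):

```python
# Illustrative subset of App-ID's standard-port definitions
APP_DEFAULT_PORTS = {"ssh": {22}, "web-browsing": {80}}

def rule_matches(rule_app: str, rule_service, pkt_app: str, pkt_port: int) -> bool:
    if pkt_app != rule_app:
        return False  # Application column must match
    if rule_service == "application-default":
        # Service column is tied to the app's own standard ports
        return pkt_port in APP_DEFAULT_PORTS[pkt_app]
    return pkt_port in rule_service  # explicit service object: a set of ports

rule_matches("ssh", "application-default", "ssh", 22)    # matches: allowed
rule_matches("ssh", "application-default", "ssh", 2222)  # no match -> later deny
```

The port-2222 attempt fails the service check even though App-ID still identifies ‘ssh’, so the traffic falls through to a later deny rule, exactly as the requirement demands.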

Question 171 

A security engineer is performing a packet capture on the firewall to troubleshoot a connection. The engineer uses the ‘drop’ stage in the capture filter. The resulting capture file shows packets from a specific user’s IP address. What is the definitive conclusion the engineer can draw from this observation?

A) The packets were successfully encrypted and forwarded by the firewall.
B) The packets were dropped by the firewall’s data plane for a specific reason, such as a deny policy.
C) The packets were received by the firewall but were malformed, so they were not processed.
D) The packets were part of a session that was successfully offloaded to the hardware.

Correct Answer: B

Explanation:

This question is about interpreting the results of the firewall’s built-in packet capture tool, which has four distinct stages: ‘receive’, ‘firewall’, ‘transmit’, and ‘drop’. Each stage provides a unique insight into the packet’s journey.

Why B) The packets were dropped by the firewall’s data plane for a specific reason, such as a deny policy is Correct: The ‘drop’ stage is a special capture filter that only logs packets that the firewall’s data plane has actively decided to discard. This is an invaluable troubleshooting tool. If packets appear in this capture, it is a definitive indication that they were not just lost in transit but were actively rejected by the firewall. The possible reasons for the drop are numerous: the packets matched a Security policy with an action of deny, failed a Threat Prevention check (e.g., a virus was found), failed a zone protection check (e.g., a port scan), or had no route to the destination.
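
The relationship between the four capture stages can be modeled as checkpoints along a packet's path. This is a conceptual sketch only, not the firewall's actual pipeline.

```python
# Conceptual model of the four capture stages: every ingested packet
# passes 'receive' and 'firewall'; its fate then diverges into either
# 'transmit' (forwarded) or 'drop' (actively discarded by the data plane).
def trace(packet_allowed):
    stages = ["receive"]           # packet ingested on an interface
    stages.append("firewall")      # session match / policy processing
    if packet_allowed:
        stages.append("transmit")  # forwarded out an egress interface
    else:
        stages.append("drop")      # actively discarded, with a drop reason
    return stages

print(trace(True))   # ['receive', 'firewall', 'transmit']
print(trace(False))  # ['receive', 'firewall', 'drop']
```

A packet can appear in a ‘transmit’ capture or a ‘drop’ capture, but never both, which is why the ‘drop’ stage gives a definitive answer.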

Why A) The packets were successfully encrypted and forwarded by the firewall is Incorrect: Packets that are successfully forwarded would be seen in the ‘transmit’ stage capture, not the ‘drop’ stage.

Why C) The packets were received by the firewall but were malformed, so they were not processed is Incorrect: Malformed packets or other early-stage ingestion problems might cause a packet to be dropped before it can be logged by the ‘drop’ stage. The ‘drop’ stage typically captures packets that have been successfully ingested and processed by the forwarding logic but were denied by a policy.

Why D) The packets were part of a session that was successfully offloaded to the hardware is Incorrect: Hardware offload (fastpath) is for successfully established and allowed sessions. A ‘drop’ capture shows the opposite: packets that are being rejected.

Question 172 

An administrator needs to configure a User-ID agent to collect IP-to-user mappings from a large, distributed Active Directory environment. The security team has mandated a least privilege model and prohibits the use of Domain Admin accounts. The team also wants to avoid active network scanning (probing) of user workstations, as it triggers endpoint security alerts. Which User-ID method provides the most secure, passive, and scalable solution?

A) Agentless User-ID configured to use WMI probing against all client subnets.
B) Clientless User-ID using a Captive Portal for all internal users.
C) Windows-based User-ID agent configured to monitor Domain Controller security event logs.
D) Agentless User-ID configured for server-session monitoring.

Correct Answer: C

Explanation:

This scenario requires a deep understanding of the different User-ID methods and their specific security and operational trade-offs. The key requirements are least privilege, passive (no probing), and scalable.

Why C) Windows-based User-ID agent configured to monitor Domain Controller security event logs is Correct: This is the best-practice solution. A Windows-based agent (or the agentless firewall) can be configured to read the security event logs from Domain Controllers (DCs). When a user logs in, the DC generates a login event (e.g., Event ID 4624). The User-ID agent can read this event and create the IP-to-user mapping. This method is:

Least Privilege: The service account for the agent only needs to be a member of the ‘Event Log Readers’ domain group, a low-privilege, read-only account.

Passive: The agent is passively reading logs from the DCs. It is not actively scanning or probing the thousands of individual client workstations, which satisfies the no-probing requirement.

Scalable: The agent only needs to monitor a handful of DCs, not thousands of clients.
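
The log-reading approach in option C can be illustrated with a minimal parser. The event records below are simplified stand-ins, not the real Windows Security log schema; only the use of Event ID 4624 (successful logon) is taken from the text above.

```python
# Build IP-to-user mappings by passively reading logon events pulled from
# Domain Controller security logs. Records are simplified stand-ins for
# Windows Event ID 4624 (successful logon) fields.
events = [
    {"event_id": 4624, "user": "CORP\\alice", "src_ip": "10.1.1.50"},
    {"event_id": 4625, "user": "CORP\\mallory", "src_ip": "10.1.1.66"},  # failed logon
    {"event_id": 4624, "user": "CORP\\bob", "src_ip": "10.1.2.20"},
]

def build_mappings(events):
    """Keep only successful logons; the clients themselves are never probed."""
    return {e["src_ip"]: e["user"] for e in events if e["event_id"] == 4624}

print(build_mappings(events))
```

The agent watches a handful of DCs and derives mappings for every workstation in the domain, which is what makes the method both passive and scalable.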

Why A) Agentless User-ID configured to use WMI probing against all client subnets is Incorrect: This directly violates the no-probing requirement. WMI probing is an active scan in which the firewall connects to each client workstation to ask who is logged in. This is noisy, generates alerts, and typically requires a service account with local administrator rights on all workstations, which violates least privilege.

Why B) Clientless User-ID using a Captive Portal for all internal users is Incorrect: Captive Portal is a last-resort User-ID method. It is highly intrusive, as it forces users to open a web browser and log in before they can access any network resources. This is a poor user experience for an internal corporate environment and is not a passive solution.

Why D) Agentless User-ID configured for server-session monitoring is Incorrect: Server-session monitoring reads the session table (e.g., open file-share sessions) on the monitored servers. It only maps users who happen to hold an active session to those specific servers, so it is not a comprehensive way to identify users at their individual workstations and does not solve the primary problem in this scenario.

Question 173 

A Palo Alto Networks firewall is configured with an Anti-Spyware profile that enables the DNS Sinkhole feature. A workstation in the ‘Trust’ zone becomes infected with malware and attempts to resolve a known command-and-control (C2) domain, malware.example.bad. What is the precise action the firewall takes to identify and mitigate this threat?

A) The firewall drops the user’s outbound DNS request, and the workstation receives a timeout.
B) The firewall allows the DNS request to go to the real DNS server, but blocks the subsequent malicious C2 traffic.
C) The firewall’s DNS-proxy intercepts the request and sends a forged response, pointing the user to the malicious IP.
D) The firewall intercepts the DNS reply from the server and replaces the real malicious IP with the configured sinkhole IP.

Correct Answer: D

Explanation:

This question tests the exact, step-by-step mechanism of the DNS Sinkhole, a powerful tool for identifying infected hosts. It is crucial to understand when in the DNS transaction the firewall intervenes.

Why D) The firewall intercepts the DNS reply from the server and replaces the real malicious IP with the configured sinkhole IP is Correct: This is the exact process.

1. The infected client (10.1.1.50) sends a DNS query for malware.example.bad to its configured DNS server (e.g., 8.8.8.8).

2. The firewall’s Security policy, which has an Anti-Spyware profile, inspects this query. It does not block the query.

3. The external DNS server (8.8.8.8) replies with the actual malicious IP (e.g., 6.6.6.6).

4. The firewall inspects this reply. The Anti-Spyware engine’s DNS Security signatures recognize malware.example.bad as malicious.

5. The firewall drops the real reply (6.6.6.6) and fabricates a new, forged DNS reply. This new reply tells the client that malware.example.bad resolves to the firewall’s sinkhole IP (e.g., an internal, non-routable IP like 10.254.254.254).

6. The client, now poisoned, attempts to initiate its C2 traffic to the sinkhole IP. The firewall sees this traffic, logs it, and drops it, giving the administrator a definitive log of the infected client’s IP.
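
The reply-rewriting step at the heart of this process can be modeled in a few lines. The domain list and sinkhole address here are illustrative; the real firewall consults DNS Security signatures, not a static set.

```python
# Model of the DNS sinkhole: the query is allowed out, but the *reply*
# is inspected, and a malicious answer is replaced with the sinkhole IP.
KNOWN_BAD = {"malware.example.bad"}  # stand-in for DNS Security signatures
SINKHOLE_IP = "10.254.254.254"       # configured, internally routed address

def inspect_reply(domain, resolved_ip):
    """Return the answer the client actually receives."""
    if domain in KNOWN_BAD:
        return SINKHOLE_IP  # forged reply; the real IP (e.g. 6.6.6.6) is dropped
    return resolved_ip      # benign replies pass through unchanged

print(inspect_reply("malware.example.bad", "6.6.6.6"))  # 10.254.254.254
print(inspect_reply("example.com", "93.184.216.34"))    # 93.184.216.34
```

Because the client now believes the C2 domain lives at the sinkhole address, its very next connection attempt identifies it in the logs.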

Why A) The firewall drops the user’s outbound DNS request, and the workstation receives a timeout is Incorrect: This describes the ‘block’ action for DNS signatures in the profile. While it prevents the C2, it does not identify the infected client; the client just sees a failed DNS query. The purpose of the sinkhole is identification.

Why B) The firewall allows the DNS request to go to the real DNS server, but blocks the subsequent malicious C2 traffic is Incorrect: This is a different, less effective protection. The firewall could do this using C2 signatures, but the DNS Sinkhole feature is designed to prevent that subsequent C2 traffic from ever being attempted by poisoning the DNS lookup itself.

Why C) The firewall’s DNS-proxy intercepts the request and sends a forged response, pointing the user to the malicious IP is Incorrect: This is nonsensical. The firewall would never intentionally send a user to a malicious IP. The goal is to send them to the sinkhole.

Question 174 

An administrator is configuring a GlobalProtect deployment to support two-factor authentication. The company uses Active Directory for primary user passwords and a RADIUS server for one-time passcodes (OTP). The security policy requires that users must successfully authenticate with both sources before being granted VPN access. How must the administrator configure the firewall’s authentication objects?

A) Create two Authentication Profiles (one for LDAP, one for RADIUS) and apply them both to the GlobalProtect Gateway.
B) Create a single Authentication Profile and configure it to use both an LDAP Server Profile and a RADIUS Server Profile.
C) Create an Authentication Sequence that includes the LDAP Authentication Profile and the RADIUS Authentication Profile, then apply the sequence to the Gateway.
D) Create a SAML Authentication Profile that federates with an Identity Provider capable of prompting for both AD and RADIUS.

Correct Answer: C

Explanation:

This scenario requires chaining multiple authentication providers, where a user must pass one check before being presented with the next. The Palo Alto Networks firewall has a specific object designed for this multi-step process.

Why C) Create an Authentication Sequence that includes the LDAP Authentication Profile and the RADIUS Authentication Profile, then apply the sequence to the Gateway is Correct: The Authentication Sequence is the purpose-built feature for this. The workflow is:

1. Create a Server Profile for LDAP (Active Directory).

2. Create a Server Profile for RADIUS.

3. Create an Authentication Profile (e.g., ‘LDAP_Auth’) that uses the LDAP Server Profile.

4. Create another Authentication Profile (e.g., ‘RADIUS_Auth’) that uses the RADIUS Server Profile.

5. Create an Authentication Sequence (e.g., ‘MFA_Sequence’) and add both ‘LDAP_Auth’ and ‘RADIUS_Auth’ to it.

6. Apply this single ‘MFA_Sequence’ object to the GlobalProtect Gateway’s Client Authentication settings. This will cause the Gateway to first prompt for AD credentials, and upon success, prompt for the RADIUS OTP.

Why A) Create two Authentication Profiles (one for LDAP, one for RADIUS) and apply them both to the GlobalProtect Gateway is Incorrect: This is not technically possible. The GlobalProtect Gateway configuration has a single field for an authentication profile. It does not allow an administrator to apply multiple, separate profiles.

Why B) Create a single Authentication Profile and configure it to use both an LDAP Server Profile and a RADIUS Server Profile is Incorrect: This is also not possible. An Authentication Profile can only be linked to a single Server Profile (or a list of servers of the same type for redundancy). It cannot be linked to two different types of servers (LDAP and RADIUS).

Why D) Create a SAML Authentication Profile that federates with an Identity Provider capable of prompting for both AD and RADIUS is Incorrect: While this is a very modern and popular way to achieve MFA (e.g., using Okta, Azure AD, or Duo as a SAML IdP), it is not what the question is describing. The question implies the administrator is configuring the firewall to talk to the AD and RADIUS servers directly, which is what the Authentication Sequence is for.

Question 175 

An administrator has configured a new virtual router named ‘VR-1’. This router has a static default route (0.0.0.0/0) pointing to the ‘Untrust’ zone ISP. A new internal OSPF area is established, and ‘VR-1’ is now learning internal routes from a neighboring router in the ‘Trust’ zone. The administrator needs to ensure that the internal routes learned via OSPF are not advertised out to the ISP. How is this prevented?

A) This is the default behavior; OSPF routes are not advertised to static route peers.
B) Configure a ‘Redistribution Profile’ to filter the OSPF routes from being sent to the ‘Untrust’ zone.
C) Configure the ‘Untrust’ interface as a ‘passive’ OSPF interface.
D) This is not possible; all routes in the virtual router are advertised to all peers.

Correct Answer: A

Explanation:

This question tests the fundamental logic of route advertisement between different routing protocols and static routes. The core concept is that routing protocols form adjacencies to share routes; a static route does not participate in this sharing.

Why A) This is the default behavior; OSPF routes are not advertised to static route peers is Correct: OSPF is a dynamic routing protocol. It forms neighbor adjacencies with other OSPF-speaking routers to exchange link-state advertisements (LSAs). The ISP gateway, which is just the target of a static route, is not an OSPF peer. The firewall’s virtual router has no OSPF adjacency with the ISP and therefore has no mechanism to advertise its OSPF routes to it. The firewall will simply use the static route to forward packets, but it will not share its internal routing table with the ISP. No special configuration is needed to prevent this.

Why B) Configure a ‘Redistribution Profile’ to filter the OSPF routes from being sent to the ‘Untrust’ zone is Incorrect: A Redistribution Profile is used to control the advertisement of routes between different routing protocols (e.g., from OSPF into BGP, or from static into OSPF). Since the ISP is not a dynamic routing peer, there is no protocol to redistribute into, making this configuration irrelevant.

Why C) Configure the ‘Untrust’ interface as a ‘passive’ OSPF interface is Incorrect: A ‘passive’ interface in OSPF is one that is included in the OSPF process (so its network is advertised inward), but it does not send or listen for OSPF ‘Hello’ packets on that interface. This is a good security practice for interfaces where you do not expect to form a neighbor, but it’s not strictly necessary to prevent route advertisement. The core reason no routes are sent is the lack of a peer, as stated in option A.

Why D) This is not possible; all routes in the virtual router are advertised to all peers is Incorrect: This is fundamentally false and would represent a massive security leak. Routers are explicitly designed to control route advertisement through protocol adjacencies and redistribution policies.

Question 176 

A firewall administrator has just committed a change to the Security policy. A user immediately reports that they can no longer access an internal web application. The administrator reviews the Traffic log and sees the user’s traffic is being denied by the default ‘interzone-default’ rule. The administrator had intended for this traffic to be allowed by a new rule, Rule 15. What is the most likely reason for this policy misconfiguration?

A) The new Rule 15 has ‘Log at Session End’ disabled, so it is not visible in the logs.
B) The user’s traffic is matching a ‘deny’ rule placed above Rule 15 in the policy.
C) The commit was not successful and the new Rule 15 does not exist in the running configuration.
D) The ‘interzone-default’ rule has a bug and is taking precedence over Rule 15.

Correct Answer: B

Explanation:

This scenario describes the most common and fundamental error in firewall policy management: a shadowed rule. The Palo Alto Networks firewall evaluates policy in a sequential, top-down, first-match logic.

Why B) The user’s traffic is matching a ‘deny’ rule placed above Rule 15 in the policy is Correct: The ‘interzone-default’ rule is the very last rule in the rulebase, and its job is to deny any traffic that has not been explicitly allowed by a rule above it. The log shows that the user’s traffic is hitting this ‘interzone-default’ rule. This is a definitive indication that the traffic did not match Rule 15, nor did it match any other allow rule. The most probable reason for this is that the traffic did match a different, more general deny rule (e.g., Rule 10: Deny all traffic from Guest zone to Server zone) that was positioned before the administrator’s new, specific allow rule (Rule 15). The firewall found its first match at Rule 10, applied the deny action, and stopped processing.
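
The top-down, first-match behavior and the shadowing it produces can be sketched in a small simulation. The rulebase below is hypothetical, reusing the Rule 10 / Rule 15 names from the explanation above.

```python
# First-match policy evaluation: the first rule whose conditions all match
# wins, and later, more specific rules are never consulted ("shadowed").
# A missing field in a rule means "any".
rulebase = [
    {"name": "Rule 10", "src_zone": "Guest", "dst_zone": "Server", "action": "deny"},
    {"name": "Rule 15", "src_zone": "Guest", "dst_zone": "Server",
     "app": "web-app", "action": "allow"},
    {"name": "interzone-default", "action": "deny"},  # rule of last resort
]

def evaluate(src_zone, dst_zone, app):
    for rule in rulebase:
        if rule.get("src_zone") not in (None, src_zone):
            continue
        if rule.get("dst_zone") not in (None, dst_zone):
            continue
        if rule.get("app") not in (None, app):
            continue
        return rule["name"], rule["action"]  # first match: stop processing

print(evaluate("Guest", "Server", "web-app"))  # ('Rule 10', 'deny'); Rule 15 is shadowed
```

The broad deny at Rule 10 matches first, so the specific allow at Rule 15 is never reached no matter what it says.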

Why A) The new Rule 15 has ‘Log at Session End’ disabled, so it is not visible in the logs is Incorrect: If the rule was being matched, the traffic would be allowed, and the user would not be reporting an outage. The logging setting does not affect the allow or deny action.

Why C) The commit was not successful and the new Rule 15 does not exist in the running configuration is Incorrect: While a failed commit would cause this, the log showing the traffic hitting the default ‘deny’ rule points more specifically to a policy logic problem, not a commit failure. The user’s complaint is that their traffic was working and now is not, which implies the commit was successful and introduced a new, flawed logic. Option B is the more direct cause of a shadowing problem.

Why D) The ‘interzone-default’ rule has a bug and is taking precedence over Rule 15 is Incorrect: The ‘interzone-default’ rule cannot take precedence over an explicit rule. It is, by definition, the rule of last resort. It only processes traffic that has not matched any rule above it.

Question 177 

A security engineer is analyzing the ‘Data Filtering’ log and observes numerous entries. Each log entry indicates that an ‘Email’ file type was detected and blocked. The user associated with the log reports they were not sending emails, but rather uploading a Microsoft Word document to a personal web-storage site. The administrator has a File Blocking profile that is set to block the ‘Email’ file type. What is the most likely explanation for this log entry?

A) The user is lying and was using a webmail client to send an email.
B) The Data Filtering profile is misconfigured and is logging all traffic as ‘Email’.
C) The File Blocking profile is inspecting the file’s metadata (magic number) and has identified it as an email message file, such as a .msg or .eml file.
D) The Microsoft Word document contained a malicious macro that was identified by the Data Filtering engine.

Correct Answer: C

Explanation:

This question tests the magic number or file-type identification capability of the File Blocking profile. The firewall does not trust the file extension (e.g., .docx); it looks at the file’s binary header to determine its true file type.

Why C) The File Blocking profile is inspecting the file’s metadata (magic number) and has identified it as an email message file, such as a .msg or .eml file is Correct: The user was likely uploading a .docx file with an Outlook .msg file embedded inside it, or had saved an email from their Outlook client (which saves as a .msg file) and renamed it ‘report.docx’ to try to evade detection. The firewall’s File Blocking engine is not fooled by the .docx extension. It opens the file, looks at the first few bytes (the magic number), and correctly identifies its true type as an ‘Email’ file. Since the policy is set to block this file type, the upload is denied and a log is generated.
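
File-type identification by header bytes rather than extension can be sketched as follows. The signature table is a small illustrative subset (the OLE2 header used by legacy Office and Outlook .msg containers, and the ZIP header used by modern .docx files), not the firewall's actual signature set.

```python
# Identify a file's true type from its leading "magic" bytes,
# ignoring whatever extension the filename happens to carry.
MAGIC = {
    b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1": "ole2",  # legacy Office / Outlook .msg container
    b"PK\x03\x04": "zip",                          # .docx/.xlsx are ZIP archives
}

def identify(data: bytes) -> str:
    for magic, ftype in MAGIC.items():
        if data.startswith(magic):
            return ftype
    return "unknown"

# A file renamed 'report.docx' whose bytes start with the OLE2 header
# is classified by its content, not its name.
msg_bytes = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1" + b"\x00" * 8
print(identify(msg_bytes))          # ole2
print(identify(b"PK\x03\x04rest"))  # zip
```

This is why renaming a .msg file to .docx does not change what the File Blocking engine sees.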

Why A) The user is lying and was using a webmail client to send an email is Incorrect: While possible, the scenario is testing a technical concept. If the user was using webmail, the App-ID would likely be ‘gmail’ or ‘office365-webmail’, and the file blocking log would be consistent with that. The fact that the user claims it was a Word doc points to a file-type mismatch.

Why B) The Data Filtering profile is misconfigured and is logging all traffic as ‘Email’ is Incorrect: File Blocking events are written to the Data Filtering log view, but this event was triggered by a File Blocking profile. It is not a Data Filtering (DLP) profile issue; that profile looks for content patterns such as Social Security numbers.

Why D) The Microsoft Word document contained a malicious macro that was identified by the Data Filtering engine is Incorrect: A malicious macro would be detected by the Antivirus or WildFire profiles. The File Blocking profile is not responsible for detecting malware; it is responsible for blocking files based on their type (e.g., block all ‘exe’, ‘bat’, ‘zip’).

Question 178 

A company has a strict policy that prohibits the use of any peer-to-peer (P2P) applications. An administrator has a Security policy rule that blocks the ‘bittorrent’ application. However, a savvy user is using a BitTorrent client that is configured to use TCP port 443. The user’s traffic is initially identified as ‘ssl’ and is allowed by a general web-access rule. After several packets, App-ID correctly re-identifies the application as ‘bittorrent’. What is this process of re-identification called, and what is the firewall’s action?

A) The process is Application Override; the session is re-evaluated and allowed.
B) The process is Application Shift; the session is re-evaluated against the Security policy and is blocked.
C) The process is Heuristic Analysis; the session is flagged in the Threat log but is not blocked.
D) The process is Port Inspection; the firewall sends a TCP reset to the client for using a non-standard port.

Correct Answer: B

Explanation:

This question tests the core concept of how App-ID handles evasive applications that attempt to masquerade as other protocols. The ability to re-classify a session mid-stream is a defining feature of a Next-Generation Firewall.

Why B) The process is Application Shift; the session is re-evaluated against the Security policy and is blocked is Correct: The term for this re-identification is Application Shift. The firewall initially matches the traffic to the web-access rule based on the port (443) and the initial data (which looks like ‘ssl’). However, App-ID continues to inspect the session. Once enough packets have been analyzed, it gathers sufficient behavioral and signature-based evidence to conclude the application is actually ‘bittorrent’. When this shift occurs, the firewall immediately re-evaluates the session against the entire Security policy rulebase, but this time with the new, correct App-ID. The session will now match the rule to block ‘bittorrent’, and the connection will be terminated.
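
The mid-session re-evaluation described above can be sketched like this. The policy and session objects are simplified illustrations, not the firewall's internal structures.

```python
# Sketch of Application Shift: a session admitted under one App-ID is
# re-evaluated against the entire policy when App-ID re-classifies it.
policy = [
    {"name": "block-p2p", "app": "bittorrent", "action": "deny"},
    {"name": "web-access", "app": "ssl", "action": "allow"},
]

def evaluate(app):
    """Top-down, first-match lookup for the session's current App-ID."""
    for rule in policy:
        if rule["app"] == app:
            return rule["name"], rule["action"]
    return "interzone-default", "deny"

session_app = "ssl"          # initial identification on TCP 443
print(evaluate(session_app))  # ('web-access', 'allow')

session_app = "bittorrent"   # App-ID shift after more packets are inspected
print(evaluate(session_app))  # ('block-p2p', 'deny'); the session is terminated
```

The key point is that the policy lookup is repeated with the new App-ID, so the original allow decision is not sticky.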

Why A) The process is Application Override; the session is re-evaluated and allowed is Incorrect: Application Override is a configuration that prevents App-ID from working. It is used to tell the firewall to trust the port and not inspect the traffic. This scenario is describing the opposite: App-ID working correctly.

Why C) The process is Heuristic Analysis; the session is flagged in the Threat log but is not blocked is Incorrect: Heuristic analysis is part of the App-ID engine, but it’s not the name of the process. More importantly, the session is blocked. The purpose of this re-evaluation is to enforce the policy, not just to log the violation.

Why D) The process is Port Inspection; the firewall sends a TCP reset to the client for using a non-standard port is Incorrect: This is not the correct term, and the logic is flawed. The firewall is not blocking the traffic because it’s on a non-standard port; it’s blocking it because it has been identified as ‘bittorrent’, which is an explicitly denied application.

Question 179 

An administrator is designing a Palo Alto Networks solution and needs to insert a firewall transparently into an existing network segment to inspect traffic between a critical server and its gateway router. The insertion must not require any IP address changes to the server or the router. Crucially, the firewall must not participate in Spanning Tree Protocol (STP) or maintain a MAC address table, as this could disrupt the sensitive Layer 2 environment. Which deployment mode must be used?

A) Layer 3
B) Layer 2
C) Virtual Wire
D) Tap

Correct Answer: C

Explanation:

This question differentiates between the two transparent deployment modes (Layer 2 and V-Wire) by focusing on their handling of Layer 2 protocols. This is a critical design consideration for transparent insertions.

Why C) Virtual Wire is Correct: A Virtual Wire (V-Wire) is a true bump-in-the-wire, or transparent bridge. It binds two interfaces together and operates logically below Layer 2 protocol processing. A V-Wire is completely invisible: it has no MAC address, it does not participate in Spanning Tree, and it does not maintain a MAC address table. It simply passes all frames, including STP BPDUs and VLAN-tagged traffic, from one port to the other, while still sending the traffic to the data plane for full App-ID and Content-ID inspection. This meets the requirement perfectly, as it is a truly invisible insertion.

Why A) Layer 3 is Incorrect: A Layer 3 interface is a router hop. It has an IP address and a MAC address. This is the opposite of a transparent insertion and would require re-IP-addressing the network.

Why B) Layer 2 is Incorrect: A Layer 2 deployment turns the firewall into a transparent switch. Like any switch, it must maintain a MAC address table to know where to forward frames. It also must participate in Spanning Tree Protocol (STP) to prevent network loops. This direct participation in L2 protocols violates the scenario’s strict requirements.

Why D) Tap is Incorrect: A Tap interface is a listen-only, passive interface. It receives a copy of traffic (e.g., from a switch SPAN port) and can be used for visibility. However, it is not in-line and it cannot block or control any traffic, which defeats the purpose of a firewall.

Question 180 

A company has a critical web application hosted in AWS. An administrator has deployed a VM-Series firewall to protect this application. The administrator wants the firewall’s Security policy to be dynamic. When new web servers are deployed in AWS, they are automatically assigned an AWS tag of ‘Prod-Web’. The administrator wants the firewall’s Security policy to automatically include the IP addresses of these new servers without any manual changes. Which object should the administrator use in the destination field of the Security policy rule?

A) An External Dynamic List (EDL)
B) An Address Group with all server IPs manually added.
C) A Dynamic Address Group (DAG)
D) A Service Group

Correct Answer: C

Explanation:

This scenario is a prime example of Infrastructure-as-Code (IaC) and policy automation in a cloud environment. Security policy must be able to adapt to a dynamic and ephemeral infrastructure, where IP addresses are constantly changing.

Why C) A Dynamic Address Group (DAG) is Correct: A Dynamic Address Group (DAG) is the feature built for this exact purpose. It is an Address Object whose membership is not defined by static IP addresses, but by dynamic tags. The administrator configures the firewall (or Panorama) to monitor the cloud environment (like AWS or vCenter) via its API. The administrator then creates a DAG with a match criterion of the AWS tag ‘Prod-Web’. The firewall will then continuously poll AWS and automatically populate this group with the IP addresses of any VM that has that tag. When a new server is launched, it gets the tag, and its IP is automatically added to the DAG and covered by the policy. When it’s terminated, its IP is removed.
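
The tag-driven membership can be modeled in a few lines. The inventory data and tag names below are illustrative; on a real firewall the mappings are learned by polling the cloud provider's API via a VM Information Source.

```python
# Dynamic Address Group: membership is computed from tags on each poll,
# never typed into the policy by hand.
def dag_members(inventory, match_tag):
    """Return the current IPs of every instance carrying the tag."""
    return sorted(ip for ip, tags in inventory.items() if match_tag in tags)

# Stand-in for what an AWS API poll might return: IP -> set of tags.
inventory = {
    "10.0.1.10": {"Prod-Web"},
    "10.0.1.11": {"Prod-Web", "Linux"},
    "10.0.2.50": {"Dev-Web"},
}
print(dag_members(inventory, "Prod-Web"))  # ['10.0.1.10', '10.0.1.11']

# A newly launched server with the tag appears on the next poll,
# with no policy change or commit required.
inventory["10.0.1.12"] = {"Prod-Web"}
print(dag_members(inventory, "Prod-Web"))  # ['10.0.1.10', '10.0.1.11', '10.0.1.12']
```

The Security policy references the group name once; the group's contents track the environment on their own.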

Why A) An External Dynamic List (EDL) is Incorrect: An EDL is used to pull a list of IPs or domains from a flat file hosted on an external web server. It is typically used for consuming third-party threat intelligence, not for monitoring an internal cloud environment’s tags.

Why B) An Address Group with all server IPs manually added is Incorrect: This is the static, legacy method. It directly violates the requirement for the policy to be automatic and adapt without manual changes.

Why D) A Service Group is Incorrect: A Service Group is a collection of ports and protocols (e.g., ‘service-http’, ‘service-https’). It has no role in defining IP addresses.

 
