Palo Alto Networks NGFW-Engineer Next-Generation Firewall Engineer Exam Dumps and Practice Test Questions Set1 Q1-20

Visit here for our full Palo Alto Networks NGFW-Engineer exam dumps and practice test questions.

Question 1:

An organization is deploying a Palo Alto Networks NGFW in a virtual wire mode. The firewall is inserted transparently between an internal router and a switch. A new security initiative requires that all traffic matching the “facebook-base” application be blocked, but only for users in the “Marketing” group. All other traffic must be allowed. How should the administrator configure the firewall to achieve this with the least administrative effort? 

A) Create a Security policy rule with an App-ID of “facebook-base”, a source user of “Marketing”, and an action of “Deny”.
B) Create a Security policy rule with an App-ID of “facebook-base”, a source user of “Marketing”, and an action of “Drop”.
C) Create two Security policy rules. The first rule allows the “Marketing” group to “any” application. The second rule denies “facebook-base” for “any” user.
D) This configuration is not possible because User-ID is not supported in a virtual wire deployment.

Correct Answer: A

Explanation: 

The correct answer is A. A virtual wire deployment model is transparent and operates at Layer 2, but it still supports Layer 7 inspection capabilities, including App-ID and User-ID. The requirement is to block a specific application for a specific user group.

Why A (Deny) is Correct: The “Deny” action in a Security policy rule is the standard method for actively blocking application traffic. When traffic from a “Marketing” user matching the “facebook-base” application hits this rule, the firewall will stop the session and send a TCP reset to both the client and the server (for TCP traffic) or an ICMP “port-unreachable” (for UDP traffic). This provides a clean session termination.

Why B (Drop) is Incorrect: The “Drop” action is a “silent” block: the firewall discards the packets without notifying the client or server. This can cause applications to time out, leaving the browser or application hanging and producing a poor user experience. Both actions block the traffic, but “Deny” terminates the session explicitly and immediately, which is generally preferred for user-facing applications like Facebook; “Drop” is sometimes chosen at the perimeter to avoid giving attackers information. For this requirement, “Deny” is the cleaner, expected answer.

Why C (Two rules) is Incorrect: This policy logic is flawed. A rule allowing the “Marketing” group to “any” application would be placed before the deny rule. This “allow” rule would match the Facebook traffic first, and the traffic would be allowed. The “deny facebook-base” rule would never be hit by the Marketing group. Policy rules are evaluated from top to bottom.

Why D (Not possible in vwire) is Incorrect: This is factually incorrect. A virtual wire deployment fully supports advanced inspection features. The firewall can perform App-ID, User-ID, and Content-ID on traffic passing through the vwire, even though it is not performing any Layer 3 routing. User-ID can be learned via any standard method (e.g., server monitoring, GlobalProtect, port-mapping) and applied to vwire traffic.
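
To make the top-down matching described above concrete, here is a minimal, illustrative Python sketch. It is not PAN-OS code; the rule structure and the “first match wins” loop are simplified assumptions that mirror how the explanation describes Security policy evaluation.

```python
# Illustrative model only (not PAN-OS code): top-down Security policy evaluation
# matching a session's App-ID and source user group; the first matching rule wins.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    app: str          # "any" or a specific App-ID such as "facebook-base"
    source_user: str  # "any" or a group name such as "Marketing"
    action: str       # "allow", "deny" (clean termination), or "drop" (silent discard)

def evaluate(rules, app, user_group):
    """Return the first rule whose app and source user both match the session."""
    for rule in rules:
        if rule.app in ("any", app) and rule.source_user in ("any", user_group):
            return rule
    return Rule("interzone-default", "any", "any", "deny")  # implicit final rule

rulebase = [
    Rule("block-facebook-marketing", "facebook-base", "Marketing", "deny"),
    Rule("allow-everything-else", "any", "any", "allow"),
]

print(evaluate(rulebase, "facebook-base", "Marketing").action)  # deny
print(evaluate(rulebase, "facebook-base", "Sales").action)      # allow
```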

Question 2: 

A network engineer is configuring a new Palo Alto Networks firewall. The engineer needs to provide internet access to a group of web servers located in the “DMZ” zone. These servers (e.g., 10.5.1.10, 10.5.1.11) must initiate outbound connections to the “Untrust” zone to download software patches and updates. When they do this, their source IP address must be translated to the firewall’s external interface IP address (e.g., 198.51.100.100). What type of NAT policy is required? 

A) A Destination NAT (DNAT) policy with a “Dynamic IP and Port” (DIPP) translation type.
B) A Source NAT (SNAT) policy with a “Static IP” translation type.
C) A Source NAT (SNAT) policy with a “Dynamic IP and Port” (DIPP) translation type.
D) A U-Turn NAT policy.

Correct Answer: C

Explanation: 

The correct answer is C. This scenario describes a classic “outbound” internet access case, where multiple internal (private) IP addresses need to share a single external (public) IP address to communicate with the internet.

Why C (SNAT with DIPP) is Correct:

Source NAT (SNAT): The primary goal is to change the source address of the packet. The traffic originates from the internal DMZ servers (10.5.1.x) and is destined for the internet. The firewall needs to translate the source IP from 10.5.1.x to 198.51.100.100.

Dynamic IP and Port (DIPP): “DIPP” is the Palo Alto Networks term for what is commonly known as Port Address Translation (PAT) or NAT Overload. Since multiple servers (10.5.1.10, 10.5.1.11, etc.) must share a single public IP (198.51.100.100), the firewall must use different source ports to keep track of the individual sessions. This “Dynamic IP and Port” setting translates the source IP to 198.51.100.100 and assigns a unique source port for each session.

Why A (Destination NAT) is Incorrect: Destination NAT (DNAT) is used for inbound traffic. A DNAT rule would be used if an external user needed to access an internal server. It translates the destination address of the packet, not the source.

Why B (SNAT with Static IP) is Incorrect: “Static IP” as a translation type implies a one-to-one mapping. This would mean 10.5.1.10 translates to 198.51.100.100, and 10.5.1.11 would need a different public IP. Since the requirement is for all servers to share the single external interface IP, DIPP is the only viable option.

Why D (U-Turn NAT) is Incorrect: U-Turn NAT is a specific configuration that allows an internal user to access an internal server using the server’s external/public IP address. This scenario is purely for outbound-initiated traffic and does not involve this “hairpin” traffic flow.
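
To visualize the DIPP behavior described above, the following Python sketch models how many private source addresses share one public IP while the firewall allocates a unique translated source port per session. The port numbers and table layout are assumptions made purely for illustration.

```python
# Illustrative DIPP/PAT model: many private sources share one public IP, and each
# session gets a unique translated source port so return traffic can be mapped back.
from itertools import count

PUBLIC_IP = "198.51.100.100"
_ports = count(start=1025)   # hypothetical starting port, for illustration only
nat_table = {}               # (private_ip, private_port) -> (public_ip, public_port)

def snat_dipp(private_ip, private_port):
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = (PUBLIC_IP, next(_ports))
    return nat_table[key]

print(snat_dipp("10.5.1.10", 50001))  # ('198.51.100.100', 1025)
print(snat_dipp("10.5.1.11", 50001))  # ('198.51.100.100', 1026) - unique public port
```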

Question 3:

An administrator wants to use Panorama to manage a fleet of 50 remote branch-office firewalls. The administrator wants to enforce a global security policy that blocks all peer-to-peer (P2P) applications for all branch offices. However, the “New York” branch requires a specific exception to allow the “webex” application, which is blocked by a global policy. How can the administrator accomplish this using Panorama’s management constructs? 

A) Create a Template Stack. Add a “Global” template with the “webex” block rule and a “New York” template with the “webex” allow rule. Assign the stack to the New York device.
B) Create a Device Group hierarchy. Place all firewalls in a “Global” parent device group. Create a “New York” child device group.
C) Create a “Global” Device Group with a “Pre-rule” to block P2P. Create a “New York” child Device Group. Create a “Post-rule” in the “New York” group to allow “webex”.
D) Place all firewalls in a “Global” parent Device Group. Create a “Pre-rule” in the “Global” group to allow “webex” (action: allow). Create a “Post-rule” in the “Global” group to block P2P (action: deny).

Correct Answer: C

Explanation: 

The correct answer is C. Panorama uses Device Groups for managing policies and Templates for managing device/network configuration. This question is about policies, so Device Groups are the correct construct. Panorama’s “Pre-rules” and “Post-rules” are the mechanism for applying global policies while allowing for local exceptions.

Why C (Pre-rule and Post-rule) is Correct: Panorama’s policy hierarchy is evaluated in a specific order:

Pre-rules (Parent Device Group): These rules are evaluated first. They are “global” and cannot be overridden by child device groups, so they are reserved for rules that must never have local exceptions.

Local Rules (Child Device Group): These are the rules specific to the device group (e.g., “New York”).

Post-rules (Parent Device Group): These rules are evaluated last. They are also “global” but are evaluated after the local rules.

In this scenario, the “New York” branch needs an exception, so the global “block P2P” rule belongs where a local rule can take precedence over it: the Post-rules of the “Global” parent device group. The “allow webex” exception is then created as a local rule in the “New York” child device group (local rules sit between the Pre-rules and the Post-rules). The evaluation then works as follows:

Traffic from New York:

Check Pre-rules (empty).

Check New York local rules: “allow webex” matches, the session is allowed, and no further rules are processed.

Traffic from any other branch (e.g., LA):

Check Pre-rules (empty).

Check that branch’s local rules (empty).

Check Global Post-rules: “block P2P” matches, and the session is denied.

This achieves the global block while permitting the specific local exception. Option C’s wording is slightly off: the “allow webex” rule is a local rule in the “New York” device group, not a “Post-rule.” However, its intent, using the pre/post-rule hierarchy together with a child device group, is the correct concept, which is why C is the best answer.

Why A (Template Stack) is Incorrect: Templates and Template Stacks are used for device configuration—things like interface IPs, zones, and HA settings. They are not used for Security, NAT, or other policies.

Why B (Just hierarchy) is Incorrect: Simply creating the hierarchy is not enough. It doesn’t describe how the policy exception is created. The “Pre” and “Post” rule sections are the key mechanism, which this option omits.

Why D (Pre-rule allow, Post-rule deny) is Incorrect: This logic is flawed. If you “allow webex” in a global pre-rule, all branches will get it, not just New York. If you “block P2P” in a global post-rule, it will be evaluated after the “allow webex” pre-rule, but the “allow webex” rule would have already allowed the traffic for everyone. This does not meet the requirement.
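
The layering described above can be sketched in a few lines of Python. This is a conceptual model, not Panorama code: rules are simple (application, action) pairs, and the global P2P block is represented as a rule that matches “webex” directly, purely for brevity.

```python
# Conceptual model of Panorama policy layering: shared Pre-rules, then the device
# group's own (local) rules, then shared Post-rules; the first match wins.
def evaluate(pre_rules, local_rules, post_rules, app):
    for layer, rules in (("pre", pre_rules), ("local", local_rules), ("post", post_rules)):
        for rule_app, action in rules:
            if rule_app in ("any", app):
                return layer, action
    return "default", "deny"

global_post = [("webex", "deny"), ("any", "allow")]   # global block that catches webex
new_york_local = [("webex", "allow")]                 # local exception in the NY device group

print(evaluate([], new_york_local, global_post, "webex"))  # ('local', 'allow')
print(evaluate([], [], global_post, "webex"))              # ('post', 'deny') for every other branch
```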

Question 4: 

A security administrator is analyzing traffic logs in the Application Command Center (ACC). The administrator sees a significant amount of traffic categorized as “unknown-tcp” and “unknown-udp” originating from a custom, in-house application. This traffic is being permitted by the final “allow-all” security policy. The administrator needs to ensure this traffic is correctly identified as “Internal-App” and that its risk level is properly assessed, without allowing it to be used as a vector for other threats. What is the most appropriate first step? 

A) Create a custom Application-ID (App-ID) for the “Internal-App” and use it in a Security policy rule.
B) Create a URL Filtering profile to block the “unknown-tcp” category.
C) Create a custom Vulnerability Protection profile to signature the unknown traffic.
D) Disable the “allow-all” security policy and create individual policies for all known applications.

Correct Answer: A

Explanation: 

The correct answer is A. The core issue is that the firewall does not recognize the custom application, so it falls into the “unknown” category. The correct solution is to “teach” the firewall what this application is by creating a custom application signature.

Why A (Create a custom App-ID) is Correct: When the firewall’s pre-defined App-ID signatures do not match a traffic flow, it is classified as “unknown-tcp” or “unknown-udp”. To properly identify and control this legitimate, in-house application, the administrator should create a custom App-ID. This is done in the “Objects” tab. The administrator can define the application based on its unique characteristics, such as the specific ports it uses, or even by defining a signature for its protocol. Once the “Internal-App” is created, it can be used in a Security policy rule. A new rule can be created: Source: Internal, Dest: Internal, Application: Internal-App, Action: Allow. This correctly identifies the traffic, associates it with the application, and allows for accurate logging and reporting in the ACC.

Why B (URL Filtering) is Incorrect: URL Filtering is used to control web (HTTP/HTTPS) traffic based on website URLs or URL categories. This “unknown-tcp” traffic is a custom application protocol, not necessarily web traffic. A URL Filtering profile would have no effect on it.

Why C (Vulnerability Protection) is Incorrect: A Vulnerability Protection profile is a threat prevention (Content-ID) feature. It is used to block known exploits and vulnerabilities within an allowed traffic flow. It does not identify the application itself. You would apply a Vulnerability profile to the “allow” rule (after creating the custom App-ID) as a second step, but it is not the solution to the identification problem.

Why D (Disable allow-all) is Incorrect: While disabling the “allow-all” rule is a security best practice (and should be done eventually), it is not the first step. If the administrator simply disables the rule, the “Internal-App” traffic (which is still “unknown”) will be blocked by the default “deny” rule, causing a business outage. The first step is to identify the legitimate traffic so it can be explicitly allowed before the “allow-all” rule is removed.
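
As a rough mental model of what a custom App-ID gives you, the sketch below classifies otherwise-unknown TCP flows by a hypothetical port and payload prefix. The port number and prefix are invented for illustration; real custom App-IDs are defined in the “Objects” tab and can use ports, signatures, and timeouts.

```python
# Illustrative sketch only: classify otherwise "unknown-tcp" flows as a custom app.
CUSTOM_APPS = [
    {"name": "Internal-App", "port": 7788, "payload_prefix": b"IAPP/1.0"},  # hypothetical values
]

def classify(dst_port, payload):
    for app in CUSTOM_APPS:
        if dst_port == app["port"] and payload.startswith(app["payload_prefix"]):
            return app["name"]
    return "unknown-tcp"

print(classify(7788, b"IAPP/1.0 LOGIN"))  # Internal-App
print(classify(7788, b"\x16\x03\x01"))    # unknown-tcp
```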

Question 5: 

A Palo Alto Networks firewall is configured with two zones: “Trust” (for internal users) and “Untrust” (for the internet). A web server with the IP address 10.1.1.50 is located in the “Trust” zone. The administrator needs to allow external users from the internet to access this web server on port 443 (HTTPS). The firewall’s external interface, in the “Untrust” zone, has the public IP address 203.0.113.5. What two policies are required to make this work? 

A) A Source NAT (SNAT) policy and a Security policy.
B) A Destination NAT (DNAT) policy and a Security policy.
C) A Security policy and a URL Filtering policy.
D) A U-Turn NAT policy and a Destination NAT (DNAT) policy.

Correct Answer: B

Explanation: 

The correct answer is B. To allow inbound traffic from the internet to an internal server, you must translate the public destination IP to the private internal IP, and you must also have a Security policy to allow that traffic to pass from the Untrust zone to the Trust zone.

Why B (Destination NAT and Security policy) is Correct: This is a standard “port forwarding” or “inbound access” scenario.

Destination NAT (DNAT) Policy: An external user will send a packet destined for the public IP: 203.0.113.5 on port 443. The firewall needs a DNAT rule to handle this. The rule will state: “If traffic comes from the ‘Untrust’ zone, destined for IP 203.0.113.5 on port 443, translate the destination IP to 10.1.1.50 and the destination port to 443 (or a different port if needed).” This gets the packet “aimed” at the correct internal server.

Security Policy: The NAT policy only translates the address; it does not permit the traffic. The firewall still evaluates the packet against the Security policy, using the post-NAT destination zone but the original (pre-NAT) destination address. The firewall sees this as traffic attempting to go from the “Untrust” zone to the “Trust” zone. Therefore, a Security policy rule is required: Source Zone: Untrust, Destination Zone: Trust, Destination Address: 203.0.113.5 (the pre-NAT public IP), Application: ssl, Service: service-https, Action: Allow.

Without both of these policies, the connection will fail.

Why A (Source NAT) is Incorrect: Source NAT (SNAT) is for outbound traffic. It changes the source address. This scenario is for inbound traffic.

Why C (URL Filtering) is Incorrect: URL Filtering is a threat prevention profile applied to allowed web traffic. It is not required to make the initial connection work, and it doesn’t handle the address translation.

Why D (U-Turn NAT) is Incorrect: U-Turn NAT is a separate, secondary configuration that would allow internal users (in the “Trust” zone) to also access the web server by using its public IP. It is not required for external users to get access.
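
The two-step behavior, a DNAT rewrite followed by a Security policy lookup, can be modeled as below. This is a simplified sketch using the question’s example values; the Security rule is matched on the post-NAT destination zone but the original (pre-NAT) destination address, as noted above.

```python
# Conceptual model of inbound DNAT plus the Security policy check (not PAN-OS code).
DNAT_RULES = {("Untrust", "203.0.113.5", 443): ("10.1.1.50", 443)}

SECURITY_RULES = [
    # (from_zone, to_zone, pre_nat_dest_addr, dest_port, action)
    ("Untrust", "Trust", "203.0.113.5", 443, "allow"),
]

def handle_inbound(src_zone, dst_ip, dst_port):
    translated = DNAT_RULES.get((src_zone, dst_ip, dst_port))
    if not translated:
        return "no NAT match"
    post_nat_zone = "Trust"  # zone of the translated address 10.1.1.50
    for frm, to, addr, port, action in SECURITY_RULES:
        if (frm, to, addr, port) == (src_zone, post_nat_zone, dst_ip, dst_port):
            return f"{action}: forwarded to {translated[0]}:{translated[1]}"
    return "interzone-default: deny"

print(handle_inbound("Untrust", "203.0.113.5", 443))  # allow: forwarded to 10.1.1.50:443
```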

Question 6: 

A user’s laptop is infected with a new, zero-day malware. The user opens a PDF attachment from an email. This PDF contains an exploit. The malware on the laptop attempts to communicate with its command-and-control (C&C) server, which is an unknown, malicious domain. The Palo Alto Networks firewall has a “default-allow” policy for outbound web traffic but is fully licensed with subscriptions. Which subscription service is designed to identify and block this type of new, previously-unknown malicious file and prevent the C&C communication? 

A) URL Filtering
B) WildFire
C) App-ID
D) DNS Security

Correct Answer: B

Explanation: 

The correct answer is B. WildFire is the Palo Alto Networks “sandbox” solution, which is its primary defense against zero-day, previously-unknown malware.

Why B (WildFire) is Correct: WildFire is a cloud-based (or on-premise) dynamic analysis engine. The workflow for this scenario would be:

File Submission: The user downloads the PDF. The firewall, via a WildFire Analysis profile, sees the unknown PDF file and submits it to the WildFire cloud for analysis. (This happens before the user even opens it, in an ideal setup).

Sandbox Analysis: WildFire “detonates” the PDF in a secure, virtual sandbox. It observes the file’s behavior (e.g., “it tries to exploit Adobe Reader,” “it drops a file in C:\Windows,” “it attempts to contact evil-domain.com“).

Verdict and Protection: WildFire determines the file is “malicious.” It immediately generates new, protective signatures and distributes them to all subscribed Palo Alto Networks firewalls globally within minutes.

It creates a new Anti-Virus signature for the malware file itself.

It creates a new DNS Security/URL Filtering signature for the malicious evil-domain.com.

Blocking: Even if the user did get infected, when the malware tries to communicate with its C&C server, the firewall will now have the “malicious” verdict for that domain (from WildFire) and will block the connection via the DNS Security or URL Filtering profile. WildFire is the “brain” that generates the intelligence to stop this zero-day threat.

Why A (URL Filtering) is Incorrect: The standard URL Filtering subscription relies on a pre-built database (PAN-DB) of known malicious sites. Since this is a new, zero-day threat, the C&C domain would not be in the database yet. It is WildFire that populates this database with new C&C domains.

Why C (App-ID) is Incorrect: App-ID identifies what the application is (e.g., “ssl”, “dns”). It does not determine if the content or destination is malicious. The malware’s C&C traffic might just look like standard “ssl” traffic.

Why D (DNS Security) is Incorrect: DNS Security (like URL Filtering) is a consumer of WildFire intelligence. The firewall’s DNS Security feature would be the component to block the C&C DNS lookup, but only after WildFire had identified the domain as malicious and generated the signature. WildFire is the source of the zero-day intelligence.

Question 7: 

An administrator is configuring High Availability (HA) on two identical PA-820 firewalls. The firewalls are physically connected using the dedicated “HA1” and “HA2” ports. The administrator has configured HA in Active/Passive mode. Which type of traffic flows over the HA2 link? 

A) Heartbeats and Hello messages.
B) Session state synchronization and data-plane traffic.
C) Configuration synchronization and management traffic.
D) Heartbeats and configuration synchronization.

Correct Answer: B

Explanation: 

The correct answer is B. In an Active/Passive HA pair, the two firewalls maintain distinct links for distinct HA functions. The HA2 link is the “data plane” link, responsible for synchronizing session information so that a failover can be stateful.

Why B (Session sync and data-plane) is Correct: The HA links have very specific roles:

HA1 (Control Plane): This link is used for management and control. It carries:

Heartbeats (Hello messages) to check if the peer is alive.

Configuration synchronization (when you commit on the active unit, the config is pushed to the passive unit).

HA state information.

HA2 (Data Plane): This link is used for synchronizing session information.

Session State Sync: When a new session (e.g., a user’s web browsing) is established on the Active firewall, the Active unit sends the session state (source/dest IP, ports, etc.) over the HA2 link to the Passive unit.

The Passive unit stores this in its session table.

Benefit: If the Active unit fails, the Passive unit takes over. When the user’s next packet for that same web session arrives, the newly-Active unit already knows about the session and can forward the packet without interruption. This is called a “stateful failover.”

In an Active/Active setup, packet forwarding for asymmetric sessions is carried over a separate HA3 link; HA2 itself remains dedicated to session and table synchronization.

Why A, C, and D are Incorrect: These options incorrectly mix up the roles of HA1 and HA2.

“Heartbeats” and “Hello messages” are on HA1.

“Configuration synchronization” is on HA1.

“Management traffic” (to the firewall’s management plane) is not on HA1 or HA2; it is on the “MGT” port.

Therefore, the only option that correctly identifies the function of HA2 is the one that includes “Session state synchronization.”

Question 8: 

A user in the “Trust” zone (IP: 10.1.1.100) attempts to browse to http://www.badsite.com. The firewall has a Security policy that allows this “web-browsing” application traffic. However, the firewall also has a URL Filtering profile attached to this “allow” rule. The URL www.badsite.com is in the “malware” category. What action will the firewall take on the user’s connection attempt? 

A) The traffic will be allowed, because the Security policy action is “allow”.
B) The traffic will be dropped, because the default-deny policy will block it.
C) The traffic will be blocked by the URL Filtering profile, and the user will see a “Blocked” response page.
D) The traffic will be sent to the WildFire cloud for analysis, and the connection will be allowed.

Correct Answer: C

Explanation: 

The correct answer is C. This question demonstrates the fundamental “allow-but-verify” model of the Palo Alto Networks firewall. The Security policy first allows the traffic based on high-level App-ID, but then the attached security profiles (Content-ID) inspect the traffic more deeply.

Why C (Blocked by URL Filtering) is Correct: The order of operations is critical:

Session Init: The user sends the HTTP GET request.

Security Policy Match: The firewall checks its Security policies. It finds the rule: Source: Trust, Dest: Untrust, App: web-browsing, Action: Allow.

Profile Inspection: The firewall sees that this “Allow” rule has a URL Filtering profile attached. It stops the packet and inspects the HTTP headers. It extracts the URL: www.badsite.com.

Profile Verdict: It checks this URL against the URL Filtering profile. The profile is configured to “block” (or “deny”) the “malware” category.

Action Taken: The URL Filtering profile’s “block” action overrides the Security policy’s “allow” action for this specific session. The firewall blocks the session and, because it’s web traffic, it generates and sends a “Blocked” response page back to the user’s browser, informing them why the site is inaccessible.

Why A (Allowed) is Incorrect: This is a common misconception. The “allow” in the Security policy is not a blanket “allow.” It means “allow this traffic, subject to inspection by these attached profiles.”

Why B (Dropped by default-deny) is Incorrect: The traffic matches an explicit “allow” rule (the web-browsing rule), so it will never reach the “default-deny” policy, which is at the very bottom of the rulebase.

Why D (Sent to WildFire) is Incorrect: WildFire is for analyzing files (like executables, PDFs, etc.). URL Filtering is for analyzing URLs. The firewall would not send a URL to WildFire for analysis; it would check it against the PAN-DB or DNS Security.
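
The “allow-but-verify” order of operations can be reduced to a small sketch: the Security rule allows the session, then the attached URL Filtering profile gets a chance to veto it. The category data and blocklist below are example values, not PAN-DB output.

```python
# Illustrative model: the profile verdict overrides the Security policy "allow".
URL_CATEGORIES = {"www.badsite.com": "malware", "www.example.com": "computer-and-internet-info"}
BLOCKED_CATEGORIES = {"malware", "phishing"}

def browse(url, policy_action="allow"):
    if policy_action != "allow":
        return "denied by Security policy"
    category = URL_CATEGORIES.get(url, "unknown")
    if category in BLOCKED_CATEGORIES:
        return f"blocked by URL Filtering ({category}); response page sent to browser"
    return "allowed"

print(browse("www.badsite.com"))  # blocked by URL Filtering (malware); response page sent to browser
print(browse("www.example.com"))  # allowed
```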

Question 9: 

An administrator wants to configure a site-to-site IPsec VPN tunnel between a Palo Alto Networks firewall and a third-party (Cisco) router. The administrator has configured all the necessary Phase 1 (IKE) and Phase 2 (IPsec) parameters. A “Tunnel” interface (e.g., tunnel.1) has been created and assigned to the “VPN” zone. To route internal user traffic (e.g., from the “Trust” zone) to the remote site’s network, what additional, critical configuration item is required? 

A) A static route in the virtual router pointing to the remote network, with the tunnel.1 interface as the next-hop.
B) A Security policy rule allowing traffic from the “Trust” zone to the “Trust” zone.
C) A NAT policy to translate the internal user’s IP address to the tunnel interface IP address.
D) An HA “Path Monitoring” configuration on the tunnel.1 interface.

Correct Answer: A

Explanation: 

The correct answer is A. The IPsec tunnel (Phase 1 and Phase 2) only builds the secure “pipe.” It does not, by itself, tell the firewall what traffic to send into the pipe. A route is required to direct the traffic.

Why A (Static Route) is Correct: This is a policy-based firewall, but it is also a router. The firewall’s routing table must have a path for the destination network.

The Problem: A user in “Trust” (e.g., 192.168.1.50) wants to reach a server at the remote site (e.g., 10.10.10.50).

Routing Lookup: The firewall receives this packet. It performs a route lookup for the destination 10.10.10.50.

The Solution: The administrator must create a static route in the firewall’s virtual router. This route will state:

Destination: 10.10.10.0/24 (the remote site’s network)

Interface: tunnel.1 (the “pipe” to send it down)

Next-hop: None (or the IP of the tunnel interface itself)

With this route, the firewall knows that to reach 10.10.10.50, it must send the packet into the tunnel.1 interface. This action (routing the packet to the tunnel interface) is what triggers the IPsec encapsulation.

Why B (Security policy) is Incorrect: A Security policy is also required, but it would be from the “Trust” zone to the “VPN” zone (the zone that tunnel.1 is in). This option’s “Trust” to “Trust” policy is incorrect for this traffic flow. More importantly, the routing is the fundamental piece required to even get the packet to the Security policy for evaluation.

Why C (NAT policy) is Incorrect: A NAT policy is generally not required for a site-to-site VPN, unless the two sites have overlapping IP address ranges. The goal of a VPN is to extend the private networks, so you typically want the original source IP to be used, not translated.

Why D (Path Monitoring) is Incorrect: Path Monitoring is an HA feature. It is used to monitor the state of the tunnel (or any other path) to trigger a failover of the HA pair. It is not required to make the tunnel route traffic in the first place.
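
To see why the static route is the missing piece, here is a minimal longest-prefix-match sketch. It assumes only two routes, the default route and the new static route to the remote network; once the lookup returns tunnel.1, the firewall hands the packet to the IPsec engine for encapsulation.

```python
# Minimal routing-lookup sketch: the most specific matching prefix wins.
import ipaddress

ROUTES = [
    ("0.0.0.0/0", "ethernet1/1"),     # default route toward the ISP
    ("10.10.10.0/24", "tunnel.1"),    # static route to the remote VPN network
]

def lookup(dst_ip):
    best = None
    for prefix, egress in ROUTES:
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst_ip) in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, egress)
    return best[1] if best else "no route"

print(lookup("10.10.10.50"))  # tunnel.1  -> packet is encapsulated into the IPsec tunnel
print(lookup("8.8.8.8"))      # ethernet1/1
```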

Question 10: 

What are the three core components of the Palo Alto Networks “Content-ID” technology? 

A) App-ID, User-ID, and Device-ID.
B) Anti-Virus, Anti-Spyware, and URL Filtering.
C) PAN-DB, WildFire, and GlobalProtect.
D) Threat Prevention, URL Filtering, and WildFire.

Correct Answer: D

Explanation: 

The correct answer is D. Content-ID is the umbrella term for the firewall’s threat prevention capabilities, which are used to inspect the content of allowed traffic flows.

Why D (Threat Prevention, URL Filtering, WildFire) is Correct: Content-ID is the “Stage 3” engine of the Single-Pass Parallel Processing (SP3) architecture. It is responsible for inspecting the payload of the traffic. This engine includes:

Threat Prevention: This is a bundle of subscriptions that includes:

Anti-Virus: Scans for known viruses, malware, and exploits in file transfers.

Anti-Spyware: Blocks known C&C (command-and-control) traffic and spyware.

Vulnerability Protection: Blocks known software exploits (e.g., buffer overflows) at the network level.

URL Filtering: This component (using the PAN-DB) inspects web traffic and blocks or allows access based on the URL category (e.g., “malware”, “phishing”, “adult”).

WildFire: This is the cloud-based sandbox for zero-day threats. It analyzes unknown files to determine if they are malicious and generates new signatures for the other Content-ID engines.

Together, these components inspect the content of the flow for threats.

Why A (App-ID, User-ID) is Incorrect: App-ID and User-ID are the “Stage 1” (App-ID) and “Stage 2” (User-ID) engines. They are used to identify the traffic in the Security policy (the “who” and the “what”). Content-ID is “Stage 3” (the “is it safe?”).

Why B (Anti-Virus, Anti-Spyware, URL Filtering) is Incorrect: This is very close, but it is incomplete. “Anti-Virus” and “Anti-Spyware” are both part of the Threat Prevention subscription. This option also omits WildFire, which is a critical and distinct part of the Content-ID engine for handling unknown threats. Option D is a more accurate, high-level summary of all the components.

Why C (PAN-DB, WildFire, GlobalProtect) is Incorrect: PAN-DB is the database used by URL Filtering; it’s not the feature itself. GlobalProtect is the remote access VPN solution, not a Content-ID component.

Question 11: 

An administrator needs to configure a Palo Alto Networks firewall to decrypt and inspect SSL/TLS traffic for internal users browsing to external websites. The users are in the “Trust” zone, and the internet is in the “Untrust” zone. Which type of SSL/TLS Decryption policy is required for this? 

A) SSL Forward Proxy
B) SSL Inbound Inspection
C) SSL-Proxy-Basic
D) SSL Reverse Proxy

Correct Answer: A

Explanation: 

The correct answer is A. “SSL Forward Proxy” is the Palo Alto Networks terminology for decrypting outbound traffic, where the firewall is acting as an intermediary (a “man-in-the-middle”) between the internal client and the external server.

Why A (SSL Forward Proxy) is Correct: The traffic flow is: Internal User (Client) —> [Firewall] —> External Website (Server)

The user (client) initiates an SSL connection to the external website.

The firewall intercepts this connection.

It “impersonates” the external server to the client, presenting its own “Forward Trust” certificate, which must be trusted by the client’s PC (e.g., via GPO).

It simultaneously “impersonates” the client to the server, creating a second SSL session with the external website.

The firewall now sits in the “middle” with two separate SSL sessions: one with the client (which it can read) and one with the server (which it can also read).

This allows the firewall to decrypt the traffic, send it to the Content-ID engines (Threat Prevention, WildFire) for inspection, and then re-encrypt it before sending it on. This “outbound” decryption is called “Forward Proxy.”

Why B (SSL Inbound Inspection) is Incorrect: “SSL Inbound Inspection” (also called “SSL Reverse Proxy”) is for the opposite scenario. It is used to protect your own internal servers (like a web server in the DMZ) from inbound attacks. It requires you to load the server’s actual private key onto the firewall.

Why C (SSL-Proxy-Basic) is Incorrect: This is not a valid or standard term for a decryption policy type on a Palo Alto Networks firewall.

Why D (SSL Reverse Proxy) is Incorrect: “SSL Reverse Proxy” is just another name for “SSL Inbound Inspection,” which is the wrong direction for this scenario.

Question 12: 

A firewall administrator has configured User-ID to successfully map user IP addresses to usernames. A new policy requires that all users in the “Engineering” Active Directory group be blocked from using the “dropbox” application. All other users must be allowed to use “dropbox”. Which Security policy rule is the most efficient way to implement this? 

A) Rule 1 (top): Source: any, Destination: any, User: “Engineering”, App: “dropbox”, Action: Deny. Rule 2 (below): Source: any, Destination: any, App: “dropbox”, Action: Allow.
B) Rule 1 (top): Source: any, Destination: any, User: “Engineering”, App: “dropbox”, Action: Allow. Rule 2 (below): Source: any, Destination: any, App: “dropbox”, Action: Deny.
C) Rule 1 (top): Source: any, Destination: any, User: ‘except “Engineering”‘, App: “dropbox”, Action: Allow. Rule 2 (below): Source: any, Destination: any, App: “dropbox”, Action: Deny.
D) Rule 1 (top): Source: any, Destination: any, User: ‘except “Engineering”‘, App: “dropbox”, Action: Deny.

Correct Answer: A

Explanation: 

The correct answer is A. Security policies are evaluated from top to bottom. The first rule that matches a session’s parameters is applied, and no further rules are evaluated. The requirement is to create a specific exception (block Engineering) to a general rule (allow everyone else).

Why A (Deny Engineering, then Allow) is Correct: This “top-down” logic is the standard way to build policy exceptions.

Rule 1: User: “Engineering”, App: “dropbox”, Action: Deny

Rule 2: User: “any”, App: “dropbox”, Action: Allow

Traffic Flow (Engineering user): A user from the “Engineering” group tries to use Dropbox. The firewall checks Rule 1.

Does the user match “Engineering”? Yes.

Does the app match “dropbox”? Yes.

The rule matches. The action is “Deny”. The session is blocked.

Rule 2 is never evaluated.

Traffic Flow (Marketing user): A user from “Marketing” tries to use Dropbox. The firewall checks Rule 1.

Does the user match “Engineering”? No.

The rule does not match. The firewall moves to Rule 2.

Does the app match “dropbox”? Yes.

The rule matches. The action is “Allow”. The session is permitted.

This logic perfectly implements the requirement.

Why B (Allow Engineering, then Deny) is Incorrect: This is the opposite of the requirement. It would allow Engineering and block everyone else.

Why C (Allow ‘except Engineering’, then Deny) is Incorrect: This logic is more complex and also achieves the goal, but it is less efficient and harder to read.

Rule 1: User: ‘except Engineering’, App: “dropbox”, Action: Allow

Rule 2: App: “dropbox”, Action: Deny (this acts as the “block Engineering” rule). This works, but it is an “allow-by-exception” model, which is less explicit than the “deny-by-exception” model in A. Option A is the cleaner and more efficient way to write the policy, as the “deny” rule is explicit.

Why D (Deny ‘except Engineering’) is Incorrect: This would deny Dropbox for everyone except the Engineering group. This is the opposite of the requirement.

Question 13: 

Which component of the Palo Alto Networks Single-Pass Parallel Processing (SP3) architecture is responsible for identifying the application, regardless of its port, protocol, or encryption? 

A) User-ID
B) Content-ID
C) App-ID
D) WildFire

Correct Answer: C

Explanation: 

The correct answer is C. App-ID is the foundational technology responsible for “what” the traffic is.

Why C (App-ID) is Correct: App-ID is the patented, Layer-7 traffic classification engine. Its entire purpose is to identify the application. It does this using a multi-step process:

Signature Match: It checks the traffic against a database of known application protocol signatures.

Protocol/Port: It uses the standard port (e.g., TCP 443 for “ssl”) as an initial guess.

Heuristics: It analyzes the protocol’s behavior (e.g., “is this SSLv3 or TLSv1.2?”).

Decoders: It can even identify applications that are “tunneling” inside another (e.g., “bittorrent” running inside “ssl”). This allows the firewall to identify an application like “bittorrent” even if a user tries to evade detection by running it on port 443 (which normally carries “ssl”). This “port-agnostic” identification is the core of App-ID.

Why A (User-ID) is Incorrect: User-ID is responsible for “who” the traffic is. It maps an IP address (e.g., 10.1.1.100) to a username (e.g., “j-doe”). It identifies the user, not the application.

Why B (Content-ID) is Incorrect: Content-ID is responsible for “is this traffic safe?”. It is the threat prevention engine (Anti-Virus, Anti-Spyware, etc.) that inspects the content of an already-identified application.

Why D (WildFire) is Incorrect: WildFire is a component of Content-ID. It is the sandbox that analyzes unknown files for unknown threats. It does not identify the application itself.

Question 14: 

An administrator is troubleshooting a “split-brain” scenario in an Active/Passive HA cluster. The administrator suspects the heartbeats are not being successfully sent or received, causing both firewalls to believe they are the “Active” unit. Which HA link should the administrator investigate first for this specific issue? 

A) The HA1 (Control Plane) link.
B) The HA2 (Data Plane) link.
C) The MGT (Management) link.
D) The Data (traffic-forwarding) link.

Correct Answer: A

Explanation: 

The correct answer is A. The “split-brain” scenario occurs when the two HA peers can no longer communicate with each other, and both decide they should be Active. The communication link that prevents this is the HA1 (Control Plane) link, which carries the heartbeats.

Why A (The HA1 link) is Correct: The HA1 link is the “control” link, and its primary job is to manage the HA state.

Heartbeats: The two firewalls send “hello” messages (heartbeats) to each other constantly over the HA1 link.

Failover Trigger: If the Passive unit stops receiving these heartbeats from the Active unit, it will wait for a “hold timer” and then assume the Active unit is “down.”

Split-Brain: If the problem is only the HA1 link itself (e.g., a bad cable or a misconfigured switch port), the Active unit is still “up” and running. The Passive unit stops hearing the heartbeats and promotes itself to “Active.” Now, both units are “Active” and trying to process traffic, which can cause a major network outage.

Therefore, any “split-brain” or “unstable failover” troubleshooting always begins with checking the physical and logical status of the HA1 link.

Why B (The HA2 link) is Incorrect: The HA2 link is for session synchronization. If the HA2 link fails, the failover will still happen (triggered by HA1), but it will not be stateful. All existing user sessions will be dropped and will have to be re-established. An HA2 failure does not, by itself, cause a failover or a split-brain.

Why C (The MGT link) is Incorrect: The MGT link is for out-of-band management, logging, and services. It plays no role in the HA failover or state logic.

Why D (The Data link) is Incorrect: The data-forwarding links (e.g., the “Trust” and “Untrust” interfaces) are monitored by “Path Monitoring,” but a failure here would trigger a graceful failover. The heartbeat itself, which is the core of the HA state, is on HA1.

Question 15: 

An administrator needs to implement a Security policy that allows employees to access “google-docs” but explicitly blocks them from using the “google-drive-upload” application-function. All other “google-base” functions must be allowed. How can this be configured? 

A) Create a Security policy with “google-docs” in the application field and set the action to “Allow”.
B) Create two policies: Rule 1 (top) denies “google-drive-upload”. Rule 2 (bottom) allows “google-base”.
C) This is not possible; “google-drive-upload” is part of the “google-docs” application.
D) Create a custom App-ID for “google-drive-upload” and set the action to “Deny”.

Correct Answer: B

Explanation: 

The correct answer is B. Palo Alto Networks’ App-ID is highly granular. It not only identifies “google-base” (the base application) but also specific functions within it, such as “google-drive-upload”. The correct way to implement this is with two rules.

Why B (Deny upload, then Allow base) is Correct: App-ID signatures are layered. “google-drive-upload” is a more specific application that “depends-on” the “google-base” application. The policy logic must reflect this.

Rule 1: Source: Trust, Dest: Untrust, App: google-drive-upload, Action: Deny

Rule 2: Source: Trust, Dest: Untrust, App: google-base (or “google-docs”), Action: Allow

Traffic Flow (User uploads file): The user is on Google Docs and initiates an upload. The firewall first sees “google-base” and “ssl”. Then, as the upload starts, the App-ID engine sees the specific function and re-classifies the session as “google-drive-upload”.

The firewall re-evaluates the policy against this new, more specific App-ID.

It checks Rule 1. The app “google-drive-upload” matches. The action is “Deny”. The upload is blocked.

Traffic Flow (User views doc): The user just views a document. The traffic is identified as “google-base” or “google-docs”.

The firewall checks Rule 1. The app does not match “google-drive-upload”.

It moves to Rule 2. The app matches “google-base”. The action is “Allow”. The session is permitted.

Why A (Allow google-docs) is Incorrect: This is insufficient. “google-docs” is a “container” app. Allowing it may implicitly allow “google-drive-upload” (depending on its default dependencies). A specific “deny” rule is the only guaranteed way to block the sub-function.

Why C (Not possible) is Incorrect: This is incorrect. This granular, function-level control is a key feature and selling point of App-ID.

Why D (Create a custom App-ID) is Incorrect: This is unnecessary. “google-drive-upload” is a standard, pre-defined App-ID signature that already exists. There is no need to create a custom one.
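
The mid-session shift from the base app to the more specific function, and the re-evaluation that follows, can be sketched as below. This is a conceptual model only; the rule pairs mirror the two-rule design described above.

```python
# Illustrative model: the session is re-evaluated when App-ID shifts to a more
# specific application function; the first matching rule still wins.
RULES = [
    ("google-drive-upload", "deny"),   # Rule 1 (top): block the upload function
    ("google-base", "allow"),          # Rule 2: allow the rest of the container app
]

def evaluate(app):
    for rule_app, action in RULES:
        if rule_app == app:
            return action
    return "interzone-default: deny"

print(evaluate("google-base"))          # allow (user viewing documents)
print(evaluate("google-drive-upload"))  # deny  (session re-classified when the upload starts)
```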

Question 16: 

An administrator configures a new GlobalProtect Portal and Gateway. The goal is to provide a seamless, “always-on” VPN experience for remote laptop users. The users must be authenticated using their Active Directory credentials, and the connection must be established before the user even logs in to their Windows machine. This is to ensure that login scripts and group policies are applied. Which two GlobalProtect features are required? 

A) Connect Method: “On-demand” and Authentication: “LDAP”.
B) Connect Method: “Pre-logon” and Authentication: “Kerberos”.
C) Connect Method: “Pre-logon” and Authentication: “Client Certificate”.
D) Connect Method: “User-logon” and Connect Method: “Pre-logon”.

Correct Answer: D

Explanation: 

The correct answer is D. This is a bit of a trick question. To achieve the “always-on” experience that also runs login scripts, you often need two tunnels: one for the machine (Pre-logon) and one for the user (User-logon).

Why D (“User-logon” and “Pre-logon”) is Correct: GlobalProtect provides different “Connect Methods” to establish the tunnel at different times.

On-demand: The user must manually click “Connect”. This is not “always-on.”

User-logon: The tunnel is established immediately after the user logs in to their machine. This is “always-on” for the user, but it is too late for GPOs and login scripts, which run during the login.

Pre-logon: The tunnel is established before the user logs in, at the Windows login screen. It runs in the “SYSTEM” context of the laptop.

The Benefit: This “Pre-logon” tunnel connects the machine to the corporate network. This allows the user’s login attempt to reach the domain controller. This is what allows the GPOs and login scripts to be applied successfully.

The Combined Solution: The standard “always-on” configuration is a combination:

A “Pre-logon” tunnel establishes first (often using a machine certificate for authentication) to connect the machine.

The user then logs in (authenticating against the DC through the Pre-logon tunnel).

After login, the GlobalProtect agent transitions to a “User-logon” tunnel, which is authenticated by the user (often using LDAP/Kerberos/SAML). This user-based tunnel is then used for the rest of the session.

Therefore, both “User-logon” (for the always-on user session) and “Pre-logon” (to enable the login process) are required for the full “always-on” experience.

Why A (On-demand) is Incorrect: “On-demand” is manual, which is the opposite of “always-on.”

Why B (Kerberos) is Incorrect: Kerberos is an authentication method. While it might be used, the Connect Method (Pre-logon) is the key feature. Also, “Pre-logon” often uses certificates, not user-based Kerberos (as the user hasn’t logged in yet).

Why C (Client Certificate) is Incorrect: A Client Certificate is an authentication method often used by the “Pre-logon” connect method. However, “Pre-logon” is the connect method itself, which is the more accurate answer to “which feature”. Option D, which names the two connect methods, is the most complete answer.

Question 17: 

A Palo Alto Networks firewall is processing a new TCP session. Which component of the Single-Pass Parallel Processing (SP3) architecture is responsible for first identifying the traffic as “ssl” and then, after decrypting it, re-identifying it as “salesforce”? 

A) Content-ID
B) App-ID
C) User-ID
D) The MGT Plane CPU

Correct Answer: B

Explanation: 

The correct answer is B. App-ID is not a “one-and-done” process. It is a continuous-inspection engine that can and will re-classify a session if new information becomes available (such as after decryption).

Why B (App-ID) is Correct: This scenario describes the power of App-ID when combined with SSL Decryption.

Initial Packet: The firewall sees the TCP “Client-Hello”. Based on the destination port (443) and the SSLv3/TLS handshake, the App-ID engine immediately classifies the session as “ssl”.

Security Policy Match: The firewall finds a “Decrypt and Allow” rule that matches App: ssl.

Decryption: The “SSL Forward Proxy” engine (Option A from a previous question) performs the “man-in-the-middle” decryption.

Re-Inspection: The decrypted application data (the HTTP GET request inside the SSL tunnel) is now fed back to the App-ID engine.

Re-Classification: The App-ID engine inspects the decrypted HTTP headers (e.g., Host: salesforce.com) and other signatures. It updates the session’s application from “ssl” to the actual application: “salesforce”.

Final Policy Check: The firewall then re-evaluates the Security policy. It checks if “salesforce” is also allowed. If not, the session is dropped.

This entire “identify, decrypt, re-identify” workflow is handled by the App-ID engine.

Why A (Content-ID) is Incorrect: Content-ID is the threat engine. It only runs after App-ID has identified the (decrypted) application. Content-ID’s job is to look for exploits within the “salesforce” traffic, not to identify it as “salesforce”.

Why C (User-ID) is Incorrect: User-ID identifies the “who” (the user). It runs in parallel but is not responsible for identifying the “what” (the application).

Why D (MGT Plane CPU) is Incorrect: All of this high-speed inspection (App-ID, Content-ID, decryption) happens in the dedicated, hardware-accelerated Data Plane. The MGT (Management) Plane CPU is only for configuration, logging, and reporting.

Question 18: 

An administrator needs to ensure that if the primary internet circuit (ISP-A) on an external interface “ethernet1/1” fails, all outbound internet traffic automatically re-routes to a backup internet circuit (ISP-B) on “ethernet1/2”. The firewall has two default routes (0.0.0.0/0), one for each ISP. How can the administrator configure the firewall to detect the failure of ISP-A and trigger the failover? 

A) Configure an “HA Failover” event linked to the default route.
B) Configure a “Path Monitor” as part of a “PBF” (Policy-Based Forwarding) rule.
C) Configure a “Link Monitor” as part of the primary default route’s “Monitor” settings.
D) This is not possible; you can only have one default route in a virtual router.

Correct Answer: C

Explanation: 

The correct answer is C This is a classic “ISP Failover” scenario. The firewall’s virtual router can have multiple routes for the same destination (e.g., 0.0.0.0/0), and it uses metrics to decide which one to use. “Link Monitoring” (or “Path Monitoring”) is the feature that automatically checks the health of a route.

Why C (Link Monitor) is Correct: This feature is found within the Virtual Router configuration.

Multiple Default Routes: The admin will configure:

Route 1: Dest: 0.0.0.0/0, Interface: ethernet1/1, Next-Hop: [ISP-A router], Metric: 10 (primary)

Route 2: Dest: 0.0.0.0/0, Interface: ethernet1/2, Next-Hop: [ISP-B router], Metric: 20 (backup)

The Problem: By default, the firewall will always use Route 1 (lowest metric). If the ISP-A router fails, but the local ethernet1/1 interface is still “up”, the firewall will keep sending traffic to the dead router (it “black-holes” the traffic).

The Solution: The administrator configures a Monitor profile (often called “Link Monitoring” or “Path Monitoring” in this context).

This profile is configured to ping a reliable target through ISP-A (e.g., 8.8.8.8 or the ISP-A router itself).

This “Monitor” profile is then attached to “Route 1”.

Failover: The firewall now constantly pings 8.8.8.8 out of ethernet1/1. If these pings fail for a set period, the firewall considers Route 1 to be “down”. It removes Route 1 from the active routing table.

Re-route: With Route 1 gone, the only remaining default route is Route 2 (Metric 20). The firewall automatically starts sending all new sessions to Route 2, successfully failing over to ISP-B.

Why B (PBF) is Incorrect: Policy-Based Forwarding (PBF) can be used for this, but it is more complex. PBF is for “policy-based” routing (e.g., “send all ‘guest’ traffic to ISP-B”). Using the route metric and a monitor is the simpler, more direct way to handle a simple default route failover.

Why A (HA Failover) is Incorrect: HA (High Availability) Failover is for failing over the entire firewall to its passive peer. It is not used for re-routing traffic between two ISP links on a single firewall.

Why D (Not possible) is Incorrect: This is factually wrong. A virtual router can have (and often does have) multiple routes to the same destination, using metrics to prioritize them.
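
The metric-plus-monitor behavior can be captured in a short sketch: two default routes with different metrics, and a monitor flag standing in for the ICMP probes. When the primary route’s monitor fails, the next-best route is selected. The values mirror the example above and are illustrative only.

```python
# Illustrative ISP-failover model: lowest-metric default route wins, but a route
# whose path monitor is down is withdrawn from consideration.
ROUTES = [
    {"dest": "0.0.0.0/0", "egress": "ethernet1/1", "metric": 10, "monitor_up": True},  # ISP-A
    {"dest": "0.0.0.0/0", "egress": "ethernet1/2", "metric": 20, "monitor_up": True},  # ISP-B
]

def active_default_route(routes):
    usable = [r for r in routes if r["monitor_up"]]
    return min(usable, key=lambda r: r["metric"])["egress"] if usable else "no route"

print(active_default_route(ROUTES))  # ethernet1/1 (ISP-A, lowest metric)
ROUTES[0]["monitor_up"] = False      # pings through ISP-A start failing
print(active_default_route(ROUTES))  # ethernet1/2 (traffic fails over to ISP-B)
```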

Question 19: 

An administrator wants to prevent users from accidentally submitting corporate credentials (usernames and passwords) to un-sanctioned, “phishing” or “unknown” category websites. The administrator has already configured SSL Forward Proxy decryption. Which security profile should be configured and applied to the Security policy to prevent this specific action? 

A) Anti-Spyware profile
B) Vulnerability Protection profile
C) URL Filtering profile
D) Credential Phishing Prevention (part of Content-ID)

Correct Answer: D

Explanation: 

The correct answer is D. This describes the exact use case for “Credential Phishing Prevention,” which is a feature of the Content-ID engine.

Why D (Credential Phishing Prevention) is Correct: This feature is a specific part of the Threat Prevention subscription (which is part of Content-ID).

How it Works: You configure the firewall (on the MGT plane) with a list of “corporate” username formats (e.g., contoso\*).

Inspection: The feature, when applied to a Security rule (that also has decryption), inspects the decrypted HTTP POST data of user web submissions.

The “Check”: As a user types their username and password into a website, the firewall is watching. It sees the user submitting contoso\j-doe and a password.

URL Category Check: The firewall simultaneously checks the URL category of the website (e.g., www.evil-phish.com).

The “Block”: The administrator configures a policy (within the URL Filtering profile, per URL category) that says: “If the submitted username matches the corporate format, AND the website’s URL category is ‘phishing’, ‘malware’, or ‘unknown’ (or any other category you choose), THEN block the submission.”

Result: The firewall blocks the HTTP POST from completing. The user’s credentials never leave the network, and the user is often shown a “blocked” page, warning them about credential theft.

Why A (Anti-Spyware) is Incorrect: The Anti-Spyware profile blocks spyware and known C&C traffic; it does not inspect credential submissions in web forms. The feature being described is “Credential Phishing Prevention” (also called credential-theft prevention), and option D names that feature, making it the more specific and correct answer.

Why B (Vulnerability Protection) is Incorrect: Vulnerability Protection blocks exploits (like buffer overflows). It does not inspect usernames and passwords in web forms.

Why C (URL Filtering) is Incorrect: On its own, URL Filtering category blocking denies access to the entire site. The requirement is to let the user reach the site (it might be an “unknown” but benign site) while blocking only the submission of corporate credentials to it, which is exactly what the Credential Phishing Prevention feature does.
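
The submission check itself amounts to “corporate username pattern AND risky URL category”, which the sketch below models. The username format, category list, and form-field names are example values chosen for illustration; the firewall only sees the POST data because SSL Forward Proxy has decrypted it.

```python
# Conceptual credential-submission check (not PAN-OS code).
import re

CORPORATE_USER = re.compile(r"^contoso\\\S+$")            # e.g. contoso\j-doe
BLOCK_SUBMISSION_CATEGORIES = {"phishing", "malware", "unknown"}

def check_post(url_category, form_fields):
    username = form_fields.get("username", "")
    if CORPORATE_USER.match(username) and url_category in BLOCK_SUBMISSION_CATEGORIES:
        return "block submission and show response page"
    return "allow submission"

print(check_post("unknown", {"username": r"contoso\j-doe", "password": "..."}))     # block ...
print(check_post("unknown", {"username": "personal@mail.com", "password": "..."}))  # allow submission
```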

Question 20: 

A firewall is configured with a “Trust” zone and a “DMZ” zone. A web server (10.1.1.10) in the “DMZ” zone needs to initiate a connection to a database server (172.16.1.10) in the “Trust” zone. By default, what is the behavior of this traffic? 

A) The traffic is allowed, because both zones are “internal” and are implicitly trusted.
B) The traffic is dropped, due to the default “intrazone-deny” policy.
C) The traffic is dropped, due to the default “interzone-deny” security policy.
D) The traffic is allowed, but only if it is “ssl” or “ssh”.

Correct Answer: C

Explanation: 

The correct answer is C. Palo Alto Networks firewalls operate on a “default-deny” and “zone-based” security model. All traffic between different zones is denied by default.

Why C (Interzone-deny) is Correct: The firewall’s logic is as follows:

Zones: The traffic is originating from the “DMZ” zone and is destined for the “Trust” zone.

“Interzone” Traffic: Because the source and destination zones are different, this is classified as “interzone” (between-zone) traffic.

Policy Lookup: The firewall will check its Security policy rulebase from top to bottom, looking for an explicit “allow” rule (e.g., Source Zone: DMZ, Dest Zone: Trust, App: sql, Action: Allow).

Default Policy: If no custom “allow” rule is found that matches this traffic, the session will eventually hit the very last rule in the rulebase: the “interzone-default” policy.

“interzone-default”: This policy is pre-configured, cannot be deleted, and has an action of Deny.

Therefore, any traffic between any two different zones is denied by default until you, the administrator, explicitly create a Security policy to allow it.

Why A (Allowed) is Incorrect: This is a critical misunderstanding of zone-based firewalls. There is no concept of “implicitly trusted” zones. A zone is just a label. The policy dictates the trust.

Why B (Intrazone-deny) is Incorrect: This is close, but uses the wrong term. “Intrazone” means traffic within the same zone (e.g., from a “DMZ” server to another “DMZ” server). The predefined “intrazone-default” policy actually allows such traffic by default, and in any case this scenario describes interzone traffic.

Why D (Allowed, but only…) is Incorrect: This is not a default behavior. The default is to “deny” all applications, not to selectively allow some.
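
As a final sketch of the default behavior, the function below falls through to the predefined default rules when no explicit rule matches. The rule tuples are a simplification; only the zone comparison matters here.

```python
# Illustrative model of the predefined default rules at the bottom of the rulebase.
def default_action(src_zone, dst_zone, explicit_rules=()):
    for rule_src, rule_dst, action in explicit_rules:
        if (rule_src, rule_dst) == (src_zone, dst_zone):
            return action
    return "intrazone-default: allow" if src_zone == dst_zone else "interzone-default: deny"

print(default_action("DMZ", "Trust"))  # interzone-default: deny (no explicit rule exists)
print(default_action("DMZ", "DMZ"))    # intrazone-default: allow (predefined default)
```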

 
