Question 61:
A network security engineer is configuring a GlobalProtect deployment and needs to differentiate the functions of the Portal and the Gateway. The deployment must support remote users on multiple operating systems, provide a list of available gateways based on priority, and also deliver the initial client configuration. Which component is exclusively responsible for storing and distributing the agent configurations, certificate information, and the prioritized list of gateways?
A) The GlobalProtect Gateway, which manages the client configuration and builds the VPN tunnel.
B) The GlobalProtect Portal, which authenticates users and builds the primary VPN tunnel.
C) The GlobalProtect Portal, which provides the client configuration and list of available gateways.
D) The GlobalProtect Gateway, which authenticates the user and provides the client configuration.
Correct Answer: C
Explanation:
This question targets a foundational concept of the GlobalProtect architecture: the distinct separation of duties between the Portal and the Gateway. Understanding this difference is critical for proper deployment and troubleshooting.
Why C) The GlobalProtect Portal, which provides the client configuration and list of available gateways is Correct: The GlobalProtect Portal’s primary and exclusive role is to act as the management and configuration distribution point for all GlobalProtect agents. When an agent first connects, it always contacts the Portal. The Portal’s job is to authenticate the agent (and/or user) and, upon success, deliver the correct agent configuration XML file. This configuration file contains all the vital settings for the client, such as tunneling options (full-tunnel vs. split-tunnel), and, most importantly, a list of one or more GlobalProtect Gateways that the agent is permitted to connect to. This list is often prioritized by the administrator (e.g., based on geographic location) to ensure optimal performance. In summary, the Portal is the boss that hands out the instructions and the map (gateway list); it does not build the data tunnel itself.
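To make the separation of duties concrete, the following minimal Python sketch models the agent workflow; the class names (Portal, Gateway, AgentConfig) and fields are purely illustrative and are not part of any Palo Alto Networks API.

```python
from dataclasses import dataclass

@dataclass
class Gateway:
    """Authenticates the user again and terminates the VPN tunnel (the data plane)."""
    name: str

    def build_tunnel(self, credentials_ok: bool) -> str:
        if not credentials_ok:
            raise PermissionError("gateway authentication failed")
        return f"tunnel established to {self.name}"

@dataclass
class AgentConfig:
    """What the Portal hands to the agent: client settings plus a prioritized gateway list."""
    tunnel_mode: str        # e.g., "split-tunnel" or "full-tunnel"
    gateways: list          # ordered by administrator-assigned priority

@dataclass
class Portal:
    """Authenticates the agent and distributes configuration; it never builds the data tunnel."""
    config: AgentConfig

    def connect(self, credentials_ok: bool) -> AgentConfig:
        if not credentials_ok:
            raise PermissionError("portal authentication failed")
        return self.config  # configuration delivery is the Portal's whole job

# Agent workflow: contact the Portal first (get config + gateway list), then the best Gateway.
portal = Portal(AgentConfig("split-tunnel", [Gateway("us-west-gw"), Gateway("eu-gw")]))
cfg = portal.connect(credentials_ok=True)
print(cfg.gateways[0].build_tunnel(credentials_ok=True))
```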
Why A) The GlobalProtect Gateway, which manages the client configuration and builds the VPN tunnel is Incorrect: This statement incorrectly assigns the Portal’s main job to the Gateway. The Gateway does not manage or distribute the client configuration; it receives it from the Portal via the agent. While the Gateway does build the VPN tunnel (the data plane part of the statement is correct), it cannot do so until the Portal has first told the agent that this Gateway exists. Therefore, this option confuses the roles.
Why B) The GlobalProtect Portal, which authenticates users and builds the primary VPN tunnel is Incorrect: This option is also incorrect. While the Portal does perform an initial authentication, its primary job is configuration delivery. It absolutely does not build the primary VPN tunnel for user data. That is the exclusive role of the Gateway. This statement blends the authentication role of the Portal with the tunneling role of the Gateway, which is a fundamental misunderstanding.
Why D) The GlobalProtect Gateway, which authenticates the user and provides the client configuration is Incorrect: This is doubly incorrect. The Gateway does not provide the client configuration; that is the Portal’s job. Furthermore, while the Gateway does perform its own authentication of the user/agent before establishing a tunnel, the initial authentication to get the config happens at the Portal. This option wrongly assigns the Portal’s main responsibility to the Gateway.
Question 62:
An organization is deploying an SSL Forward Proxy decryption policy. The security team has a strict compliance requirement to not decrypt any traffic destined for the ‘Financial-Services’ and ‘Health-and-Medicine’ URL categories. However, all traffic destined for ‘Social-Networking’ must be decrypted and inspected. An engineer configures the following Decryption Policy:
- Rule 1: Source: any, Destination: any, URL Category: ‘Social-Networking’, Action: Decrypt
- Rule 2: Source: any, Destination: any, URL Category: ‘Financial-Services’, ‘Health-and-Medicine’, Action: No-Decrypt
What is the functional outcome of this configuration?
A) The policy is correct. Social-Networking traffic will be decrypted, and financial traffic will not be decrypted.
B) The policy will fail to commit because ‘Decrypt’ rules cannot be placed before ‘No-Decrypt’ rules.
C) The policy is flawed. Financial and health traffic will be decrypted if the firewall has not yet identified the URL category.
D) The policy is flawed. A website categorized as both ‘Social-Networking’ and ‘Financial-Services’ would be decrypted, violating compliance.
Correct Answer: D
Explanation:
This question tests the critical understanding of policy evaluation order, which is top-down, first-match. This concept is paramount in Decryption policy, where a mistake can lead to significant compliance violations.
Why D) The policy is flawed. A website categorized as both ‘Social-Networking’ and ‘Financial-Services’ would be decrypted, violating compliance is Correct: Palo Alto Networks policies, including Decryption, are processed sequentially from top to bottom. The firewall executes the action of the first rule that matches the traffic. In this scenario, Rule 1 is a ‘Decrypt’ rule. If a user visits a site that PAN-DB categorizes as both ‘Social-Networking’ and ‘Financial-Services’ (e.g., a financial company’s LinkedIn page), the traffic will be evaluated. It will match Rule 1 first because it contains the ‘Social-Networking’ category. The firewall will immediately apply the ‘Decrypt’ action and stop processing any further decryption rules. The traffic will never reach Rule 2, which contains the ‘No-Decrypt’ action. This results in the decryption of sensitive financial traffic, which is a direct violation of the stated compliance requirement. The correct design is to always place explicit ‘No-Decrypt’ rules for sensitive categories at the very top of the policy.
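The first-match behavior is easy to reproduce in a few lines of Python; the rule structure below is an illustrative model of the evaluation logic, not a PAN-OS data structure.

```python
# Illustrative model of top-down, first-match decryption policy evaluation.
rules = [
    {"name": "Rule 1", "categories": {"social-networking"}, "action": "decrypt"},
    {"name": "Rule 2", "categories": {"financial-services", "health-and-medicine"}, "action": "no-decrypt"},
]

def evaluate(site_categories: set) -> str:
    for rule in rules:                                     # top-down
        if rule["categories"] & site_categories:           # any category overlap is a match
            return f'{rule["name"]} -> {rule["action"]}'   # first match wins; evaluation stops
    return "default -> no-decrypt"

# A site carrying BOTH categories hits Rule 1 and gets decrypted -- the compliance gap.
print(evaluate({"social-networking", "financial-services"}))
# Swapping the rule order (no-decrypt on top) closes the gap.
```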
Why A) The policy is correct. Social-Networking traffic will be decrypted, and financial traffic will not be decrypted is Incorrect: This statement is false because it fails to account for the top-down, first-match logic and the possibility of overlapping URL categories. It assumes that traffic can only match one rule, which is not how the evaluation works.
Why B) The policy will fail to commit because ‘Decrypt’ rules cannot be placed before ‘No-Decrypt’ rules is Incorrect: This is technically false. The firewall’s commit process will not fail; it is a syntactically and logically valid configuration from the firewall’s perspective. The firewall does not understand the administrator’s intent or the external compliance requirements. It will happily commit this policy. The problem is not a commit failure; it is a critical design flaw.
Why C) The policy is flawed. Financial and health traffic will be decrypted if the firewall has not yet identified the URL category is Incorrect: This misdiagnoses the problem. If the firewall has not yet identified the URL category, the traffic would likely fall through both rules and be handled by the default action (typically no-decrypt). The problem described in option D is about what happens when the category is known and matches the first rule, not when it is unknown.
Question 63:
An administrator has configured an Active/Passive HA pair. To detect upstream outages, a Path Monitoring profile has been configured to monitor the primary ISP’s gateway (198.51.100.1) via ICMP. A Link Monitoring profile is also configured for the external interface (ethernet1/1). During a maintenance window, the upstream ISP router (198.51.100.1) goes offline, but the physical link between the firewall and the ISP’s switch remains up. What is the expected failover behavior?
A) No failover will occur because Link Monitoring shows the interface is still up.
B) The firewall will enter a suspended state because of the conflicting monitoring information.
C) A failover will be triggered because the Path Monitoring profile will detect the unreachability of the ISP gateway.
D) A failover will only be triggered if the HA1 link also fails, as this is a dual-failure scenario.
Correct Answer: C
Explanation:
This scenario is designed to test the understanding of the different HA monitoring mechanisms and their hierarchy. Specifically, it pits Link Monitoring (physical state) against Path Monitoring (logical reachability).
Why C) A failover will be triggered because the Path Monitoring profile will detect the unreachability of the ISP gateway is Correct: This is the exact purpose of Path Monitoring. The firewall understands that a physical link being up (as reported by Link Monitoring) does not guarantee the ability to pass traffic. Path Monitoring is a more intelligent, logical check. The HA process will continuously send ICMP pings (or other probes) to the configured destination (198.51.100.1). When that upstream router goes offline, those pings will fail. After a configured threshold, the firewall will declare the path down. This path failure is treated as a critical event, and it will trigger an HA failover to the passive device. The firewall correctly assumes that if it cannot reach its gateway, it cannot perform its job, and the passive device should take over.
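The decision logic can be summarized in a short sketch; this is a conceptual illustration, not the actual behavior of the HA daemon.

```python
# Illustrative failover decision: a monitored-path failure alone is sufficient,
# regardless of the physical link state reported by Link Monitoring.
def should_fail_over(link_up: bool, path_reachable: bool) -> bool:
    if not link_up:            # Link Monitoring: physical failure
        return True
    if not path_reachable:     # Path Monitoring: the gateway stopped answering probes
        return True
    return False

# Scenario from the question: the link is up, but 198.51.100.1 no longer answers pings.
print(should_fail_over(link_up=True, path_reachable=False))   # True -> failover
```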
Why A) No failover will occur because Link Monitoring shows the interface is still up is Incorrect: This is the exact problem Path Monitoring was designed to solve. If only Link Monitoring were configured, this statement would be true, and the firewall would become a black hole for traffic. However, because Path Monitoring is configured, it overrides the less-intelligent Link Monitoring status for failover decisions.
Why B) The firewall will enter a suspended state because of the conflicting monitoring information is Incorrect: The firewall HA logic is not confused by this. The information is not conflicting; it is layered. The link is up, but the path is down. The path status is the more critical metric, and the firewall will act on it. A suspended state is a specific HA state that is not triggered by this monitoring scenario.
Why D) A failover will only be triggered if the HA1 link also fails, as this is a dual-failure scenario is Incorrect: This is not a dual-failure scenario. The failure of a monitored path is, by itself, a sufficient and valid trigger for failover. The health of the HA1 (control) link is irrelevant to this specific failover condition.
Question 64:
A security administrator needs to configure User-ID to collect IP-to-user mappings from a Windows Domain Controller. The corporate security policy prohibits using any account with Domain Admin privileges. The administrator configures the agentless User-ID feature on the firewall. Which permission is required for the service account used by the firewall to successfully monitor the Security Event Logs on the Domain Controller?
A) The service account must be a member of the ‘Domain Admins’ group.
B) The service account must be a member of the ‘Server Operators’ group.
C) The service account must be a member of the ‘Event Log Readers’ group.
D) The service account must have ‘Local Administrator’ rights on the firewall itself.
Correct Answer: C
Explanation:
This question tests the principle of least privilege as it applies to Palo Alto Networks User-ID configuration. Using over-privileged service accounts is a major security risk, and the exam requires knowledge of the minimum necessary permissions.
Why C) The service account must be a member of the ‘Event Log Readers’ group is Correct: For the User-ID process (either agentless on the firewall or using a Windows-based agent) to read the Security Event Logs from a Domain Controller, it needs permission to access those logs. The built-in Windows group ‘Event Log Readers’ is specifically designed for this purpose. It grants an account the ability to read all event logs, including the Security log (which contains the crucial logon events, such as Event ID 4624), without granting any other administrative rights. This is the best-practice, least-privilege method for this integration. Additional permissions (such as WMI access) might be needed for other functions like client probing, but for reading the logs, ‘Event Log Readers’ is the key.
Why A) The service account must be a member of the ‘Domain Admins’ group is Incorrect: This is explicitly prohibited by the scenario and is a major security anti-pattern. While being a Domain Admin would work (as it is all-powerful), it is not the required permission and grossly violates the principle of least privilege.
Why B) The service account must be a member of the ‘Server Operators’ group is Incorrect: The ‘Server Operators’ group provides a high level of privilege, including the ability to log on interactively, shut down servers, and manage services. While this might include the ability to read event logs, it is far more privileged than necessary and is not the correct, least-privilege answer.
Why D) The service account must have ‘Local Administrator’ rights on the firewall itself is Incorrect: This confuses the permissions on the Windows Domain Controller with permissions on the Palo Alto Networks firewall. The service account is an Active Directory account used to authenticate to Windows; it has no account, role, or permission level on the firewall itself, whose administrator roles are configured separately.
Question 65:
An administrator is managing a global deployment of 200 firewalls using Panorama. The administrator needs to configure the DNS server IP address (Device > Setup > Services) on all firewalls. However, each of the 200 sites has a different local DNS server IP. How can the administrator use Panorama to manage this without creating 200 separate Templates?
A) This is not possible; 200 individual Templates must be created.
B) By creating a single Template, using a Template Variable for the DNS IP address, and assigning the Template to a Template Stack.
C) By creating a single Device Group and using a variable in the Security policies.
D) By creating a single Template Stack and overriding the DNS server IP address on each of the 200 devices.
Correct Answer: B
Explanation:
This scenario is the poster child for the power and scalability of Panorama Template Variables. The challenge is to manage a setting that is structurally common (everyone needs a DNS IP) but operationally unique (everyone’s IP is different).
Why B) By creating a single Template, using a Template Variable for the DNS IP address, and assigning the Template to a Template Stack is Correct: This is the most scalable and correct solution. The administrator would create a single Template (e.g., ‘Global-Settings’). In the DNS server IP field, instead of typing an IP address, they would create and use a Template Variable (e.g., $Primary_DNS). They would then add this Template to a Template Stack, and assign all 200 firewalls to that stack. Panorama will then show a list of all 200 devices in the stack with a column for the $Primary_DNS variable. The administrator can then import a CSV or manually enter the unique IP address for each of the 200 firewalls in that one view. This allows for managing a common configuration base while substituting unique values per device.
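If the per-device values already exist in an inventory system, preparing them for bulk entry is straightforward to script; the sketch below uses placeholder serial numbers and column headers, since the exact CSV format Panorama expects should be taken from its own variable export rather than from this example.

```python
import csv

# Hypothetical inventory: firewall serial number -> site-local DNS server.
site_dns = {
    "0079000001": "10.10.1.53",
    "0079000002": "10.20.1.53",
    "0079000003": "10.30.1.53",
}

# Write one row per device for the $Primary_DNS template variable.
# The column headers are placeholders -- start from Panorama's own CSV export
# to get the exact layout it expects, then fill in the value column.
with open("primary_dns_variables.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["device_serial", "variable", "value"])
    for serial, dns_ip in site_dns.items():
        writer.writerow([serial, "$Primary_DNS", dns_ip])
```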
Why A) This is not possible; 200 individual Templates must be created is Incorrect: This is the brute-force method that Panorama variables were specifically designed to prevent. While it would work, it is an unmanageable nightmare and not the correct or scalable approach.
Why C) By creating a single Device Group and using a variable in the Security policies is Incorrect: This is fundamentally wrong. Device Groups are for managing Policies (Security, NAT, Decryption). Templates are for managing Device and Network configurations (like DNS servers). This option confuses the two core components of Panorama.
Why D) By creating a single Template Stack and overriding the DNS server IP address on each of the 200 devices is Incorrect: This is close, but less elegant and scalable than using a variable. An override would be more difficult to manage at scale. The intended and designed feature for this exact use case (substituting values like IPs, hostnames, etc.) is the Template Variable. Option B is the more precise and correct answer.
Question 66:
A company is using Palo Alto Networks SD-WAN. A traffic distribution profile is configured to use two ISPs: ISP-A (Fiber) and ISP-B (Cable). The policy is set to distribute load based on session-count, with a threshold of 250 sessions. An application session is established and is currently using the ISP-A path. The session count on ISP-A then exceeds 250. What happens to new application sessions, and what happens to the existing session?
A) New sessions are steered to ISP-B. The existing session is immediately terminated.
B) New sessions are steered to ISP-B. The existing session continues on ISP-A until it terminates naturally.
C) New sessions are steered to ISP-B. The existing session is immediately re-routed to ISP-B to alleviate the load.
D) New sessions and the existing session all continue on ISP-A until the path fails its health check.
Correct Answer: B
Explanation:
This question tests the logic of SD-WAN traffic distribution and, most importantly, how it handles existing stateful sessions, which is a critical concept for avoiding application disruption.
Why B) New sessions are steered to ISP-B. The existing session continues on ISP-A until it terminates naturally is Correct: The SD-WAN traffic distribution logic (like session-count, bandwidth, etc.) is evaluated at the beginning of a new session. When the session-count on ISP-A exceeds the 250-session threshold, the SD-WAN logic determines that this path is now over-capacity. It will look for the next best path, which is ISP-B. Therefore, all new session requests will be directed to ISP-B. However, the SD-WAN feature is stateful and understands that the existing session is a live, established connection. Arbitrarily terminating it (Option A) or re-routing it (Option C) would break the application (e.g., drop a VoIP call, kill a file transfer). To ensure session persistence and a good user experience, the existing session is allowed to continue on its original path (ISP-A) until it is closed by the user/application.
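A compact sketch of that selection logic follows; it is deliberately simplified (a single preferred path and a static counter) and does not reflect the real SD-WAN implementation.

```python
# Illustrative: new sessions re-evaluate the distribution profile, while
# existing sessions stay pinned to the path on which they were established.
SESSION_THRESHOLD = 250
paths = {"ISP-A": 251, "ISP-B": 40}        # current session counts; ISP-A listed first (preferred)

def pick_path_for_new_session() -> str:
    for name, count in paths.items():
        if count < SESSION_THRESHOLD:
            return name
    return min(paths, key=paths.get)       # every path over threshold: pick the least loaded

existing_session_path = "ISP-A"            # established before the threshold was crossed
print(pick_path_for_new_session())         # ISP-B: new sessions are steered away
print(existing_session_path)               # ISP-A: the live session runs until it ends naturally
```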
Why A) New sessions are steered to ISP-B. The existing session is immediately terminated is Incorrect: This would be a terrible user experience. SD-WAN is designed to improve application performance, not randomly terminate sessions due to load rebalancing. This is not how stateful firewalls operate.
Why C) New sessions are steered to ISP-B. The existing session is immediately re-routed to ISP-B to alleviate the load is Incorrect: While some advanced SD-WAN features can attempt seamless session moves, this is not the default behavior for a load-distribution threshold. Moving a stateful TCP session to a new path (which likely involves a new Source IP via NAT) is extremely complex and almost certain to break the session. The default, safe action is to let the existing session die naturally.
Why D) New sessions and the existing session all continue on ISP-A until the path fails its health check is Incorrect: This defeats the purpose of the session-count load-distribution. If new sessions continued to use ISP-A, the load would keep increasing. The threshold exists to trigger a change in new session placement, and this option claims that change would not happen.
Question 67:
A firewall administrator is troubleshooting a commit failure on a VM-Series firewall deployed in a public cloud environment. The commit error indicates that the firewall has exceeded its licensed limit for the number of Security policies. Which component is responsible for defining and enforcing this specific per-model capacity limit?
A) The Palo Alto Networks WildFire cloud.
B) The Panorama management server.
C) The VM-Series model license that was applied.
D) The Security Profile attached to the rules.
Correct Answer: C
Explanation:
This question tests the understanding of how VM-Series firewalls are licensed and how those licenses dictate the firewall’s capacity. Unlike hardware appliances, virtual firewalls have their capabilities defined by their license model.
Why C) The VM-Series model license that was applied is Correct: VM-Series firewalls are sold in different models (e.g., VM-50, VM-100, VM-300, VM-500, VM-700) to match different performance and capacity needs. The license associated with each model does more than just enable the firewall; it defines its resource limits. These limits include maximum session count, max NAT rules, max IPSec tunnels, and, as in this scenario, the maximum number of Security policy rules. The VM-50, for example, has a much lower rule limit than a VM-700. When the administrator tries to commit a configuration that exceeds this license-defined limit, the commit will fail with an error message stating that a capacity limit has been exceeded.
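A pre-commit sanity check along these lines is easy to script; the per-model limits below are placeholder numbers for illustration only, not published capacities, and the function is hypothetical.

```python
# Placeholder capacities for illustration only -- consult the official
# VM-Series datasheet for the real per-model Security rule limits.
MAX_SECURITY_RULES = {"VM-50": 200, "VM-100": 1500, "VM-300": 10000}

def can_commit(model: str, candidate_rule_count: int) -> bool:
    """Return True if the candidate configuration fits the licensed model's rule capacity."""
    return candidate_rule_count <= MAX_SECURITY_RULES[model]

print(can_commit("VM-50", 350))    # False -> a commit like the one in the scenario fails
print(can_commit("VM-300", 350))   # True
```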
Why A) The Palo Alto Networks WildFire cloud is Incorrect: The WildFire cloud is a sandboxing service for malware analysis. It has absolutely no role in licensing, defining, or enforcing the security policy capacity of a VM-Series firewall.
Why B) The Panorama management server is Incorrect: While Panorama is used to push the configuration to the VM-Series, it is not the component that defines the capacity limit. Panorama simply delivers the config; it is the VM-Series firewall’s own local license that determines if it can accept and commit that configuration. The commit failure is happening on the VM-Series firewall itself, not on Panorama.
Why D) The Security Profile attached to the rules is Incorrect: Security Profiles (Anti-Virus, Anti-Spyware, etc.) are objects that are attached to Security policy rules to apply threat inspection. They do not in any way define or control the total number of Security policy rules that the firewall can have.
Question 68:
An administrator configures a WildFire Analysis Profile and applies it to a security rule. A user then downloads a file with a previously unknown hash. The firewall forwards the file to the WildFire cloud. Five minutes later, WildFire returns a ‘malicious’ verdict. Where would the administrator look to see the log entry that correlates the original user and file download with this newly received ‘malicious’ verdict?
A) In the Traffic log, by filtering for the ‘wildfire’ application.
B) In the Data Filtering log, as the file was a data exfiltration attempt.
C) In the WildFire Submissions log; this log is updated with the verdict.
D) In the Threat log, by filtering for the log type ‘wildfire’.
Correct Answer: D
Explanation:
This question is about the WildFire logging workflow. The firewall generates different logs at different stages of the process, and it’s crucial to know where the final verdict is recorded.
Why D) In the Threat log, by filtering for the log type ‘wildfire’ is Correct: This is a key concept of WildFire’s retroactive logging. When the file is first downloaded, it is unknown, so it is allowed. A log entry is created in the WildFire Submissions log showing the file was forwarded (this is option C, which is a distractor). When the verdict is returned five minutes later, the firewall cannot go back in time and change the original Traffic log. Instead, it generates a brand new log entry at the time the verdict is received. This new log is a Threat Log of the type ‘wildfire’. This new Threat log will contain the ‘malicious’ verdict and, critically, will include all the session details from the original download (user, source IP, application, etc.), effectively correlating the verdict back to the event.
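For reference, the same Threat-log view can also be pulled over the PAN-OS XML API; the sketch below is not taken from official documentation, so treat the query filter (the subtype string in particular) and the hostname as assumptions to verify against your own Monitor > Threat entries.

```python
import requests

FIREWALL = "https://firewall.example.com"   # hypothetical hostname
API_KEY = "..."                             # generated separately (type=keygen)

# Ask the firewall for recent Threat logs, filtered to WildFire verdict entries.
# The subtype value "wildfire-virus" is an assumption -- confirm it against a
# real log entry before relying on it.
params = {
    "type": "log",
    "log-type": "threat",
    "query": "(subtype eq wildfire-virus)",
    "nlogs": "20",
    "key": API_KEY,
}
resp = requests.get(f"{FIREWALL}/api/", params=params, verify=False)
print(resp.text)   # returns a job ID; retrieve results with type=log&action=get&job-id=<id>
```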
Why A) In the Traffic log, by filtering for the ‘wildfire’ application is Incorrect: The Traffic log shows the original session. The application would be ‘ssl’ or ‘web-browsing’, and the action would be ‘allow’. This log will not be updated retroactively with the malware verdict. The ‘wildfire’ application itself is not what is logged in the Traffic log for the user’s download.
Why B) In the Data Filtering log, as the file was a data exfiltration attempt is Incorrect: The Data Filtering log is for DLP (Data Loss Prevention) profiles, which look for specific data patterns (like credit card numbers) in outbound traffic. This is a malware analysis scenario, not a DLP scenario.
Why C) In the WildFire Submissions log; this log is updated with the verdict is Incorrect: This is a subtle but important distinction. The WildFire Submissions log is the first log generated. It confirms that the file was sent to WildFire and shows its status (e.g., ‘pending’, ‘success’). While this log is updated with the verdict, the primary, actionable log entry that is used for alerting, SIEM integration, and correlation is the Threat Log. The Threat log is the authoritative record of a threat being found, whereas the Submission log is the authoritative record of a submission being made. D is the better and more correct answer for where to see the threat itself.
Question 69:
A company’s security policy dictates that users are allowed to access ‘Social-Networking’ sites, but they are not allowed to log in, post, or upload content. An administrator has applied a URL Filtering profile to a Security policy rule that allows the ‘Social-Networking’ category. How can the administrator enforce this granular requirement without blocking the entire category?
A) This is not possible with URL Filtering; it requires a Data Filtering profile.
B) By configuring the URL Filtering profile to set the ‘Social-Networking’ category action to ‘alert’.
C) By using an Application Filter in the Security policy rule to block social-media applications.
D) By using App-ID in the Security policy rule, allowing the ‘facebook-base’ application but explicitly blocking ‘facebook-posting’ and ‘facebook-chat’.
Correct Answer: D
Explanation:
This scenario is a classic example of why App-ID is a superior technology to legacy port-based or URL-based filtering. The requirement is to control features within a website, not just the website itself.
Why D) By using App-ID in the Security policy rule, allowing the ‘facebook-base’ application but explicitly blocking ‘facebook-posting’ and ‘facebook-chat’ is Correct: This is the core value of App-ID. Palo Alto Networks does not just see ‘facebook’ as a single application. It uses its signature database to differentiate the various functions within that application. ‘facebook-base’ is the App-ID for read-only browsing and viewing of the site. ‘facebook-posting’ is the separate App-ID that is triggered when a user attempts to upload content or post a comment. By creating a Security policy rule that explicitly allows ‘facebook-base’ but is followed by a rule that explicitly denies ‘facebook-posting’, the administrator can perfectly enforce the company’s granular policy.
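The rule pair and the mid-session application shift can be sketched as follows; the evaluation model is simplified for illustration and is not the firewall's actual session processing.

```python
# Illustrative: App-ID can shift mid-session (facebook-base -> facebook-posting),
# and the Security policy is re-evaluated against the newly identified application.
rules = [
    {"name": "Allow-FB-Browsing",  "apps": {"facebook-base"},                     "action": "allow"},
    {"name": "Block-FB-Functions", "apps": {"facebook-posting", "facebook-chat"}, "action": "deny"},
]

def evaluate(app: str) -> str:
    for rule in rules:
        if app in rule["apps"]:
            return f'{rule["name"]}: {rule["action"]}'
    return "interzone-default: deny"

print(evaluate("facebook-base"))       # browsing and viewing are allowed
print(evaluate("facebook-posting"))    # the moment the user posts, the traffic is denied
```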
Why A) This is not possible with URL Filtering; it requires a Data Filtering profile is Incorrect: This is not a Data Filtering (DLP) use case. DLP is for blocking specific content, like social security numbers. This requirement is about blocking an application function (posting), regardless of the content.
Why B) By configuring the URL Filtering profile to set the ‘Social-Networking’ category action to ‘alert’ is Incorrect: The URL Filtering profile operates on the entire URL category. The actions are typically ‘allow’, ‘block’, ‘alert’, ‘continue’. Setting it to ‘alert’ would still allow the posting; it would just log it. It would not enforce the policy to block the posting while allowing browsing.
Why C) By using an Application Filter in the Security policy rule to block social-media applications is Incorrect: An Application Filter is an object used to group multiple applications. This option is too vague. Option D is the specific, correct implementation. Simply blocking all social-media applications would violate the requirement to allow access for browsing.
Question 70:
A security administrator is configuring a URL Filtering profile to protect users from submitting their corporate login credentials to phishing sites. The administrator wants the firewall to prevent the submission of credentials to any site that is not on a corporate-approved whitelist. Which feature, in conjunction with User-ID, should be configured to achieve this?
A) The Data Filtering profile, by adding usernames to the profile.
B) The Credential Phishing Prevention feature within the URL Filtering profile.
C) An Anti-Spyware profile with DNS Sinkholing enabled.
D) A Decryption policy rule with an action of ‘no-decrypt’ for phishing sites.
Correct Answer: B
Explanation:
This question is about a very specific and powerful feature of the URL Filtering subscription: preventing credential theft. This goes beyond simply blocking known-bad sites.
Why B) The Credential Phishing Prevention feature within the URL Filtering profile is Correct: This feature is purpose-built for this exact scenario. When User-ID is enabled, the firewall knows the username of the person using the computer (e.g., ‘jsmith’). The Credential Phishing Prevention feature, when enabled in a URL Filtering profile, will inspect outbound HTTP/HTTPS (if decrypted) POST requests. It can be configured to detect when a user is submitting their corporate username (or password, with the Windows-based agent) to a website. The policy can be set to ‘block’ credential submissions to all URL categories except for a custom whitelist of approved corporate sites. This proactively stops a user from giving their login details to a phishing site, even a zero-day site not yet categorized as ‘phishing’.
Why A) The Data Filtering profile, by adding usernames to the profile is Incorrect: Data Filtering (DLP) is for preventing the loss of sensitive data (like PII, credit card numbers, etc.). It is not designed or optimized for detecting username/password submissions in the same way Credential Phishing Prevention is.
Why C) An Anti-Spyware profile with DNS Sinkholing enabled is Incorrect: DNS Sinkholing is a feature used to identify infected clients that are trying to perform C2 (command-and-control) DNS lookups. It has no role in preventing a user from willingly submitting their credentials to a web form.
Why D) A Decryption policy rule with an action of ‘no-decrypt’ for phishing sites is Incorrect: This would have the opposite effect. To inspect the HTTP POST data and find the username, the traffic must be decrypted. Setting a ‘no-decrypt’ rule would make the firewall blind to the credential submission.
Question 71:
An administrator is configuring a GlobalProtect Gateway and needs to provide two-factor authentication. The company uses a RADIUS server for one-time passwords (OTPs) and Active Directory for user passwords. The goal is to have the user prompted for both. How should this be configured in the firewall?
A) Create a single Authentication Profile that points to the RADIUS server, as it can proxy to Active Directory.
B) Create two separate Authentication Profiles, one for RADIUS and one for LDAP, and apply both to the Gateway.
C) Create an Authentication Sequence that includes an Authentication Profile for LDAP and an Authentication Profile for RADIUS.
D) Create a SAML Authentication Profile and configure the SAML IdP to handle both RADIUS and LDAP.
Correct Answer: C
Explanation:
This scenario tests the knowledge of how the Palo Alto Networks firewall handles multi-factor authentication (MFA) by chaining different authentication sources.
Why C) Create an Authentication Sequence that includes an Authentication Profile for LDAP and an Authentication Profile for RADIUS is Correct: The Authentication Sequence is a feature specifically designed for this purpose. An administrator first creates two separate Server Profiles (one for LDAP/AD, one for RADIUS). Then, they create two Authentication Profiles, each one pointing to one of the server profiles. Finally, they create an Authentication Sequence object. Inside this sequence, they add both Authentication Profiles (e.g., ‘LDAP_Profile’ and ‘RADIUS_Profile’). When this single Authentication Sequence object is applied to the GlobalProtect Gateway’s authentication settings, the firewall will prompt the user for the credentials for the first profile (LDAP), and upon success, will then prompt them for the credentials for the second profile (RADIUS OTP).
Why A) Create a single Authentication Profile that points to the RADIUS server, as it can proxy to Active Directory is Incorrect: While some RADIUS servers (like Microsoft NPS) can be configured to validate against AD, this assumes a specific backend configuration and doesn’t inherently provide the two-factor functionality. The firewall’s native, more robust way to handle this is with an Authentication Sequence.
Why B) Create two separate Authentication Profiles, one for RADIUS and one for LDAP, and apply both to the Gateway is Incorrect: The GlobalProtect Gateway configuration only allows for one Authentication Profile (or Authentication Sequence) to be applied. You cannot apply two of them simultaneously. The Authentication Sequence is the container object that allows you to bundle them.
Why D) Create a SAML Authentication Profile and configure the SAML IdP to handle both RADIUS and LDAP is Incorrect: While using SAML with an IdP (like Okta, Azure AD, etc.) is a very common and modern way to achieve MFA, the scenario specifically mentions LDAP and RADIUS server profiles on the firewall. This implies a legacy, direct authentication model. Given the components mentioned, the Authentication Sequence is the direct answer.
Question 72:
An engineer is configuring an Active/Passive HA pair. The HA1 control link is connected directly between the two firewalls using the dedicated HSCI ports. The HA2 data link is also connected directly between the devices. The administrator wants to add redundancy to the HA1 control link in case the HSCI port or cable fails. What is the best-practice method to achieve this redundancy?
A) Enable Path Monitoring on the HA1 interface.
B) Enable Heartbeat Backup and use the HA2 data link as the backup path.
C) Configure a backup HA1 link using an in-band data port (e.g., ethernet1/8) and a separate cable.
D) Configure a Link Aggregation Group (LAG) for the HA1 ports.
Correct Answer: B
Explanation:
This question is about HA link redundancy. A failure of the HA1 control link can lead to a split-brain scenario, so redundancy is critical.
Why B) Enable Heartbeat Backup and use the HA2 data link as the backup path is Correct: This is the built-in, designed mechanism for HA1 redundancy. The firewall can be configured to use the HA2 data link as a backup path for the HA1 heartbeats. This is done by enabling the Heartbeat Backup option in the HA configuration. If the primary HA1 link (the HSCI port) fails, the firewalls will detect this and begin sending their HA1 heartbeat messages over the HA2 link instead. This prevents a false failover or split-brain scenario, as both devices can maintain communication and correctly identify each other’s state. This does not require any extra cables or ports beyond the standard HA2 link.
Why A) Enable Path Monitoring on the HA1 interface is Incorrect: Path Monitoring is used to monitor external network paths (like ISP gateways) to trigger a failover. It is not used to monitor the HA links themselves. The HA links have their own heartbeat and monitoring mechanisms.
Why C) Configure a backup HA1 link using an in-band data port (e.g., ethernet1/8) and a separate cable is Incorrect: While you can use an in-band data port as your primary HA1 link (if you don’t have a dedicated one), this is not the method for creating a backup. The designed backup path is over the HA2 link. Configuring a second, separate HA1-backup interface is not a standard or supported configuration.
Why D) Configure a Link Aggregation Group (LAG) for the HA1 ports is Incorrect: The dedicated HA ports (HA1, HA2) do not support LACP or link aggregation. They are standard Ethernet ports that are used for a specific purpose. LAG is a feature for data plane interfaces.
Question 73:
A security team wants to prioritize network bandwidth for their critical ‘SAP’ application and strictly limit the bandwidth available to ‘youtube’. All other traffic should be treated as best-effort. Which Palo Alto Networks feature is used to enforce these application-based bandwidth rules?
A) Quality of Service (QoS)
B) Policy-Based Forwarding (PBF)
C) URL Filtering Profile
D) Application Override
Correct Answer: A
Explanation:
This is a classic use case for controlling network performance based on application. The keywords are prioritize, limit, and bandwidth.
Why A) Quality of Service (QoS) is Correct: Quality of Service (QoS) is the Palo Alto Networks feature designed for this exact purpose. QoS leverages App-ID to identify applications. The administrator can create a QoS policy that matches the ‘SAP’ application and assigns it to a high-priority bandwidth class (e.g., Class 2, guaranteed 50Mbps). They can then create another rule that matches ‘youtube’ and assigns it to a low-priority class with a maximum bandwidth limit (e.g., Class 7, max 10Mbps). This ensures that when the network is congested, SAP traffic will always get the bandwidth it needs, while YouTube is throttled.
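A toy model of those class assignments is shown below; the class numbers and bandwidth figures simply restate the examples in this explanation and are not sizing recommendations.

```python
# Illustrative mapping of App-ID to QoS class, reusing the example values above.
qos_classes = {
    2: {"guaranteed_mbps": 50, "max_mbps": None},   # high-priority class for SAP
    7: {"guaranteed_mbps": 0,  "max_mbps": 10},     # throttled class for YouTube
}
app_to_class = {"sap": 2, "youtube": 7}

def class_for(app: str) -> int:
    return app_to_class.get(app, 4)    # everything else lands in a best-effort class

for app in ("sap", "youtube", "web-browsing"):
    cls = class_for(app)
    print(app, "-> class", cls, qos_classes.get(cls, "(best-effort defaults)"))
```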
Why B) Policy-Based Forwarding (PBF) is Incorrect: PBF is used to control the routing path of traffic based on application or source. For example, it can be used to send ‘youtube’ traffic out a cheaper, slower ISP. While this is a form of traffic engineering, it does not directly manage bandwidth priority or limits on a single link. QoS manages the bandwidth; PBF manages the next-hop.
Why C) URL Filtering Profile is Incorrect: A URL Filtering profile is used to ‘allow’ or ‘block’ access to websites based on their category. It has no capability to manage, prioritize, or limit the bandwidth consumed by those sites.
Why D) Application Override is Incorrect: An Application Override policy is used to bypass App-ID for a specific port and trust the administrator’s definition. This is the opposite of what is needed. QoS requires App-ID to be working correctly to identify ‘SAP’ and ‘youtube’ in the first place.
Question 74:
An administrator manages a large, distributed network with firewalls at the headquarters (HQ) and at multiple branch offices. A user authenticates to the network at a branch office, and the branch firewall successfully creates a User-ID mapping. The user then tries to access a resource at the HQ, which is protected by the HQ firewall. How can the HQ firewall learn the User-ID mapping from the branch firewall?
A) The HQ firewall must be configured to monitor the branch office’s Domain Controller.
B) By configuring User-ID Redistribution, using Panorama or a hub-and-spoke firewall topology.
C) This is not possible; the user must re-authenticate to the HQ firewall’s Captive Portal.
D) By enabling the ‘Forward User-ID’ option on the branch firewall’s external interface.
Correct Answer: B
Explanation:
This is a common challenge in large, multi-site deployments. A User-ID mapping is learned in one location, but the policy that needs it exists on a different firewall where the user did not authenticate.
Why B) By configuring User-ID Redistribution, using Panorama or a hub-and-spoke firewall topology is Correct: The feature designed to solve this is User-ID Redistribution. This allows the firewalls to share their IP-to-user mapping tables with each other. This can be configured in two main ways:
- Via Panorama: Each firewall (HQ and branch) is configured to send its mappings to Panorama. Panorama is then configured to redistribute all mappings it receives back down to all firewalls. Panorama acts as the central clearinghouse.
- Firewall-to-Firewall: A hub-and-spoke topology can be built where each branch firewall is configured to send its mappings to the HQ firewall. In this scenario, the branch firewall sends its mapping (e.g., 10.10.1.50 = ‘jsmith’) to the HQ firewall, which adds it to its own table and can then enforce user-based policy.
Why A) The HQ firewall must be configured to monitor the branch office’s Domain Controller is Incorrect: This is inefficient, adds complexity, and might not be possible due to network latency or segmentation. Furthermore, it doesn’t solve the problem if the mapping was learned via a different method (like GlobalProtect) at the branch. Redistribution is the clean, scalable solution.
Why C) This is not possible; the user must re-authenticate to the HQ firewall’s Captive Portal is Incorrect: This is false and would create a terrible user experience. The User-ID Redistribution feature was created specifically to prevent this kind of problem.
Why D) By enabling the ‘Forward User-ID’ option on the branch firewall’s external interface is Incorrect: There is no such feature. This option is a distractor. User-ID sharing is a configured, authenticated, and secure process between User-ID agents/firewalls, not a simple forwarding tick-box on an interface.
Question 75:
An administrator needs to forward all ‘Threat’ logs to a SIEM for long-term storage and analysis. At the same time, they need to forward all log types (Traffic, Threat, URL, etc.) to Panorama for centralized reporting. What is the correct object to configure to achieve this multi-destination forwarding?
A) A single Log Forwarding Profile with two match lists, one for the SIEM and one for Panorama.
B) Two separate Log Forwarding Profiles: one for the SIEM and one for Panorama, applied to the same rules.
C) A Log Forwarding Profile with a single match list that sends all logs to Panorama, which then forwards the Threat logs to the SIEM.
D) A Server Profile for the SIEM and a separate Log Forwarding Profile for Panorama.
Correct Answer: C
Explanation:
This question tests the best-practice method for log forwarding, especially in a Panorama-managed environment. The goal is to simplify configuration on the firewalls and centralize management.
Why C) A Log Forwarding Profile with a single match list that sends all logs to Panorama, which then forwards the Threat logs to the SIEM is Correct: This is the most scalable and recommended architecture. The administrator should configure the managed firewalls with one simple Log Forwarding Profile. This profile’s job is to send all log types to Panorama. This fulfills the centralized reporting requirement. Then, on Panorama itself, the administrator configures a separate log forwarding object. This object will receive all the logs from all the firewalls and can then be configured to selectively forward certain log types (like ‘Threat’) to one or more external destinations, such as a SIEM. This centralizes the SIEM forwarding logic on Panorama, making it easy to change or add new SIEMs later without touching the policy on hundreds of firewalls.
Why A) A single Log Forwarding Profile with two match lists, one for the SIEM and one for Panorama is Incorrect: While a Log Forwarding Profile can have multiple entries, this configuration would be on the firewall itself. It’s less scalable than centralizing the logic on Panorama. If the SIEM’s IP changes, the administrator would have to push a change to all firewalls. Option C is the superior architecture.
Why B) Two separate Log Forwarding Profiles: one for the SIEM and one for Panorama, applied to the same rules is Incorrect: A Security policy rule can only have one Log Forwarding Profile applied to it. You cannot apply two of them. This is a technical limitation.
Why D) A Server Profile for the SIEM and a separate Log Forwarding Profile for Panorama is Incorrect: This is confusing the objects. A Server Profile (e.g., Syslog, SNMP) is needed to define the SIEM, but that profile is then used within a Log Forwarding Profile. This option doesn’t accurately describe the full configuration.
Question 76:
An administrator has two ISP connections and wants to load-balance outbound traffic per-session to utilize both links simultaneously. Both ISP routers are connected to the ‘Untrust’ zone. The administrator has configured two static routes (one for each ISP’s gateway) with the same metric. Which additional feature must be configured on the virtual router to enable this per-session load balancing?
A) Policy-Based Forwarding (PBF)
B) ECMP (Equal-Cost Multi-Path)
C) QoS (Quality of Service)
D) SD-WAN
Correct Answer: B
Explanation:
This is a core routing question. The scenario describes a desire for load balancing where two paths to the same destination (the internet, via a default route) are available.
Why B) ECMP (Equal-Cost Multi-Path) is Correct: ECMP is the routing feature that allows a router (or firewall) to use multiple paths to the same destination when those paths have the same cost (or metric). The administrator has already set up the prerequisite: two static default routes with the same metric. The final step is to enable the ECMP feature within the virtual router configuration. Once enabled, the administrator can choose a load-balancing method (such as an IP hash of source/destination addresses or a round-robin method), and the firewall will distribute new sessions across both ISP links while keeping each established session on a single path.
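The per-session idea can be illustrated with a simple hash over the session tuple; the hash choice below is arbitrary and is not the firewall's actual algorithm.

```python
import hashlib

EQUAL_COST_NEXT_HOPS = ["ISP-A via 203.0.113.1", "ISP-B via 198.51.100.1"]

def pick_next_hop(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Per-session ECMP: hashing the session tuple keeps every packet of a session
    on the same path while different sessions spread across both links."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return EQUAL_COST_NEXT_HOPS[digest % len(EQUAL_COST_NEXT_HOPS)]

print(pick_next_hop("10.1.1.10", "93.184.216.34", 51000, 443))
print(pick_next_hop("10.1.1.11", "93.184.216.34", 51001, 443))
```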
Why A) Policy-Based Forwarding (PBF) is Incorrect: PBF is used to override the routing table. It could be used for a form of load balancing (e.g., send Source subnet A to ISP-A, Source subnet B to ISP-B), but it is not the feature that enables dynamic, per-session balancing based on equal-cost routes. ECMP is the native routing-table feature for this.
Why C) QoS (Quality of Service) is Incorrect: QoS manages bandwidth priority and limits for traffic that is already being routed. It does not make the routing decision itself or decide which ISP link to use.
Why D) SD-WAN is Incorrect: While the SD-WAN feature is a much more advanced way to load-balance ISP links (using metrics like latency and jitter), the question is describing a traditional routing feature. ECMP is the foundational routing protocol feature, whereas SD-WAN is a licensed, application-aware overlay. Given the context of static routes and metrics, ECMP is the correct answer.
Question 77:
An administrator configures an IPSec tunnel between two Palo Alto Networks firewalls. The tunnel is established, and Phase 1 and Phase 2 SAs are up. However, traffic is not passing through the tunnel. The administrator has confirmed there are no NAT policies interfering and the Security policies are correct. What is the most common remaining reason for this failure?
A) The Proxy IDs do not match on both ends of the tunnel.
B) The firewall is missing a static route in the virtual router to direct traffic into the tunnel.
C) The IKE Crypto profile is using a different DH Group than the IPSec Crypto profile.
D) The Security policy rule has logging disabled, which prevents traffic flow.
Correct Answer: B
Explanation:
This is a classic IPSec troubleshooting scenario. The control plane (the tunnel SAs) is up, but the data plane (actual traffic) is not working. This almost always points to a routing or policy problem.
Why B) The firewall is missing a static route in the virtual router to direct traffic into the tunnel is Correct: Just because an IPSec tunnel is defined does not mean the firewall knows which traffic to send into it. The firewall’s virtual router makes forwarding decisions based on its routing table. If a user in the Trust zone (10.1.1.0/24) tries to reach a server at the remote site (10.10.10.0/24), the firewall must have a route for 10.10.10.0/24. This route (typically a static route) must point to the IPSec tunnel interface (e.g., ‘tunnel.1’). Without this route, the firewall will use its default route and try to send the traffic to the internet, not the tunnel. This is the most common reason why a perfectly healthy tunnel will not pass traffic.
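One quick way to confirm the routing gap is a FIB lookup for the remote subnet. The sketch below drives that check through the XML API; the op-command XML is written to mirror the CLI command test routing fib-lookup, but treat the exact element names, virtual router name, and hostname as assumptions to verify in your environment.

```python
import requests

FIREWALL = "https://firewall.example.com"   # hypothetical hostname
API_KEY = "..."                             # obtained separately

# Intended equivalent of: test routing fib-lookup virtual-router default ip 10.10.10.5
# If the result shows the default route / internet egress interface instead of the
# tunnel interface (e.g., tunnel.1), the static route into the tunnel is missing.
cmd = (
    "<test><routing><fib-lookup>"
    "<virtual-router>default</virtual-router>"
    "<ip>10.10.10.5</ip>"
    "</fib-lookup></routing></test>"
)
resp = requests.get(
    f"{FIREWALL}/api/",
    params={"type": "op", "cmd": cmd, "key": API_KEY},
    verify=False,
)
print(resp.text)
```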
Why A) The Proxy IDs do not match on both ends of the tunnel is Incorrect: If the Proxy IDs (which define the local and remote networks) did not match, the Phase 2 SA would not come up. The scenario states that Phase 2 is up, so the Proxy IDs must be correctly negotiated.
Why C) The IKE Crypto profile is using a different DH Group than the IPSec Crypto profile is Incorrect: The IKE Crypto profile is for Phase 1, and the IPSec Crypto profile is for Phase 2. They are separate and do not need to use the same DH Group. Furthermore, the scenario states that both Phase 1 and Phase 2 are up, which means all cryptographic negotiations were successful.
Why D) The Security policy rule has logging disabled, which prevents traffic flow is Incorrect: The logging setting on a Security policy rule has no impact on whether the rule allows or denies traffic. It is purely for visibility. A rule with logging disabled will still pass (or block) traffic perfectly fine.
Question 78:
A company is deploying a Palo Alto Networks firewall to protect an internal web server. The firewall is configured for SSL Inbound Inspection, which requires the server’s private key to be imported onto the firewall. Which Decryption policy action is used to enable this specific type of decryption?
A) SSL Forward Proxy
B) SSL Backward Proxy
C) SSH Proxy
D) SSL Inbound Inspection
Correct Answer: D
Explanation:
This question is about the two primary types of SSL decryption and their distinct use cases. The key is identifying whether the firewall is protecting clients going out or servers being accessed in.
Why D) SSL Inbound Inspection is Correct: The scenario describes protecting an internal web server from traffic coming in (e.g., from the internet or an untrusted zone). This is the use case for SSL Inbound Inspection. In this mode, the administrator imports the web server’s actual certificate and, crucially, its private key onto the firewall. When an external client connects, the firewall is able to act as the server, decrypt the incoming traffic, inspect it for threats (like SQL injection or malware uploads), and then re-encrypt it before forwarding it to the real server.
Why A) SSL Forward Proxy is Incorrect: SSL Forward Proxy is the opposite use case. It is used to protect internal clients (users) as they access the external internet. In this mode, the firewall does not need any private keys. Instead, it dynamically generates a man-in-the-middle (MITM) certificate for each site the user visits and signs it with a corporate root CA that is trusted by the client.
Why B) SSL Backward Proxy is Incorrect: This is not a valid term or feature in Palo Alto Networks. It is a distractor. The two main types are Forward Proxy and Inbound Inspection.
Why C) SSH Proxy is Incorrect: SSH Proxy is a similar concept but is used for decrypting and inspecting Secure Shell (SSH) traffic to control commands or file transfers within an SSH session. It is not used for SSL/TLS traffic to a web server.
Question 79:
An administrator wants to create a Security policy rule to block a set of malicious IP addresses that is published by a third-party threat intelligence feed. The list of IPs changes frequently. The administrator wants the firewall to automatically update this list without manual intervention. Which object type should be used as the source or destination in the Security policy rule?
A) A Security Profile Group
B) An Address Group containing static Address Objects.
C) An External Dynamic List (EDL)
D) A Dynamic Address Group (DAG)
Correct Answer: C
Explanation:
This question is about using dynamic, external intelligence in a policy. The key requirements are that the list is external (from a third party) and must be updated automatically.
Why C) An External Dynamic List (EDL) is Correct: An External Dynamic List (EDL), also known as a Dynamic Block List (DBL) in some contexts, is the feature designed for this. The administrator configures an EDL object on the firewall, pointing it to a URL that hosts the list of IPs (or domains/URLs). The firewall is then configured to poll this URL at a regular interval (e.g., every hour). If the list at the URL has changed, the firewall automatically downloads the new list and updates its policy. The administrator simply uses this single EDL object in a Security policy rule (e.g., Source: EDL-Object, Action: Deny). This set-it-and-forget-it approach is extremely efficient.
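The feed itself is just a plain-text document with one entry per line, which makes it easy to stand up a test list; in the sketch below the filename, port, and addresses are arbitrary test values.

```python
# Minimal sketch: serve an IP-type EDL as plain text, one entry per line.
from http.server import HTTPServer, SimpleHTTPRequestHandler

with open("blocklist.txt", "w") as f:
    f.write("203.0.113.7\n")
    f.write("198.51.100.0/24\n")       # IP lists also accept CIDR ranges

# Point the EDL object's source URL at http://<this-host>:8000/blocklist.txt and set a
# check interval; the firewall re-fetches the list and applies changes on its own.
HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()
```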
Why A) A Security Profile Group is Incorrect: A Security Profile Group is a container object used to bundle Threat Prevention profiles (AV, Anti-Spyware, etc.) together to apply to a rule. It has nothing to do with IP address lists.
Why B) An Address Group containing static Address Objects is Incorrect: This is the manual way. The administrator would have to manually monitor the threat feed, create new Address Objects, and add/remove them from the Address Group. This violates the requirement for automatic updates.
Why D) A Dynamic Address Group (DAG) is Incorrect: A Dynamic Address Group (DAG) is a powerful feature, but it populates itself based on tags, not an external list. For example, a DAG can be configured to automatically include all VMs in an AWS or vCenter environment that have a tag of ‘web-server’. This is for dynamic internal or cloud inventory, not for consuming external threat feeds.
Question 80:
An administrator has created a custom in-house application that runs on TCP port 12345. The administrator wants to create a Security policy rule that only allows this specific application, but App-ID is currently identifying it as ‘unknown-tcp’. How can the administrator create a strict policy that bypasses App-ID for this traffic and treats it as the custom application?
A) Create a custom Application signature for the traffic.
B) Create an Application Filter and add it to the rule.
C) Create a Service object for TCP 12345 and set the application to ‘any’.
D) Create an Application Override policy rule.
Correct Answer: D
Explanation:
This question focuses on how to handle non-standard or unknown applications for which App-ID signatures do not exist or are not practical to create.
Why D) Create an Application Override policy rule is Correct: An Application Override is a specific policy type designed for this exact situation. It tells the firewall’s App-ID engine to stop trying to identify the application using signatures. The administrator creates an Application Override rule that matches the traffic (e.g., Source, Destination, and Service TCP 12345). In the rule, they specify what App-ID to assign it (e.g., ‘my-custom-app’). From that point on, when the firewall sees traffic on TCP port 12345, it will immediately stop inspecting, label it ‘my-custom-app’, and move on to Security policy evaluation. The administrator can then write a Security policy rule that says ‘allow application my-custom-app’.
Why A) Create a custom Application signature for the traffic is Incorrect: While creating a custom App-ID signature is a valid and powerful feature, it is often complex and more work than is necessary for a simple, trusted, in-house application. The Application Override is the simpler, more direct way to achieve the goal of bypassing the ‘unknown-tcp’ identification.
Why B) Create an Application Filter and add it to the rule is Incorrect: An Application Filter is an object used to group existing applications. It cannot be used to re-classify unknown traffic.
Why C) Create a Service object for TCP 12345 and set the application to ‘any’ is Incorrect: This is a very insecure practice. Setting the application to ‘any’ would allow any application that happens to run over port 12345 (e.g., a tunnel, malware, etc.) to pass through the firewall. This completely defeats the purpose of a Next-Generation Firewall. An Application Override is much safer because it is still limited to that specific port and gives it a unique name that can be used in policy.