Microsoft AZ-801 Configuring Windows Server Hybrid Advanced Services Exam Dumps and Practice Test Questions Set10 Q181-200


Question 181. You are the administrator for a large hybrid environment with hundreds of on-premises Windows Server 2022 machines onboarded as Azure Arc-enabled servers. A new corporate security policy mandates that the “Windows Remote Management (WinRM)” service must be set to “Disabled” on all servers unless a specific exception is documented. You need to use Azure to perform a comprehensive audit to identify all servers that are non-compliant with this policy. You also need the ability to remediate the non-compliant servers automatically in the future using the same tool. Which Azure service and feature should you utilize?

A) Microsoft Defender for Cloud, using a Security Recommendation.
B) Azure Monitor, using a custom KQL query in a Log Analytics workspace.
C) Azure Automation, using a PowerShell runbook.
D) Azure Policy, using a Guest Configuration (GC) definition.

Correct Answer: D

Explanation: 

The correct answer is D, Azure Policy, using a Guest Configuration (GC) definition. This is the precise, purpose-built Azure service for auditing and enforcing in-guest operating system settings at scale across a hybrid fleet of servers.

Why D (Azure Policy using Guest Configuration) is Correct: The Azure Arc-enabled servers agent makes your on-premises servers visible as first-class resources within the Azure Resource Manager (ARM). This integration unlocks the ability to govern these servers using Azure’s native governance tools.

Azure Policy: This is the primary governance service in Azure. It allows you to define, assign, and manage policies that enforce or audit resource configurations.

Guest Configuration (GC): Standard Azure Policy can only assess the properties of the Azure resource itself (e.g., its tags, location, or resource-level configurations). To look inside the operating system (to check registry keys, service states, file contents, etc.), you must use the Guest Configuration feature.

How it Works: The Azure Arc agent on your on-premises server manages an extension called the “Guest Configuration extension.” You assign a specific “Policy Initiative” (a set of definitions), such as the built-in “Audit Windows servers that do not have the specified services installed,” or in this case, a custom policy to check the state of the WinRM service.

Audit and Report: You would assign this policy with the “effect” set to Audit. The GC agent on every Arc-enabled server in the fleet will periodically check the status of the “WinRM” service. It will then report its compliance state (“Compliant” if disabled, “Non-Compliant” if enabled) back to the central Azure Policy compliance dashboard. This gives you a single pane of glass to identify all non-compliant servers.

Remediation: The prompt also mentions the “ability to remediate… automatically.” This is the other key capability of Guest Configuration. You can create a policy with the “effect” set to DeployIfNotExists. This policy would not only audit the service but, if it finds it running, would automatically execute a paired Desired State Configuration (DSC) script to set the service to “Disabled,” thus enforcing compliance. This holistic “audit-then-enforce” model is the core strength of Guest Configuration.
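
To make the assignment step concrete, here is a minimal Az PowerShell sketch (the resource group name, assignment name, and the definition lookup are illustrative placeholders, not part of the exam scenario):

# Hypothetical names: substitute the built-in or custom GC definition you actually use.
$rg  = Get-AzResourceGroup -Name 'rg-arc-servers'                       # scope containing the Arc-enabled servers
$def = Get-AzPolicyDefinition -Name '<built-in-or-custom-definition-name-or-guid>'

# Audit-only assignment; a DeployIfNotExists version would also need
# -IdentityType SystemAssigned and -Location so the remediation task can run.
New-AzPolicyAssignment -Name 'audit-winrm-disabled' -PolicyDefinition $def -Scope $rg.ResourceId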

Why A (Microsoft Defender for Cloud) is Incorrect: Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection (CWP) solution. It uses Azure Policy as its engine to generate security recommendations. While Defender might have a built-in recommendation for this (as part of the security benchmarks), the underlying technology you would be using, and the one you would use to create a custom audit, is Azure Policy and its Guest Configuration feature. Option D is the more precise and foundational answer.

Why B (Azure Monitor using KQL) is Incorrect: Azure Monitor and Log Analytics are for data ingestion and analysis. You would first have to configure all 500 servers to send their service status (e.g., via an event log or a custom log) to a Log Analytics workspace. Then, you could write a Kusto Query Language (KQL) query to find the non-compliant servers. This is reactive and complex to set up. It is not a proactive, state-based auditing system. Furthermore, Azure Monitor is an analysis tool; it has no native remediation capability (though it could trigger an Automation runbook, which is more complex).

Why C (Azure Automation using a PowerShell runbook) is Incorrect: You could write a PowerShell runbook in Azure Automation, executed by a Hybrid Runbook Worker, to check the status of the WinRM service on all 500 servers. However, this is a “brute-force” imperative script. You would have to manage the logic, the targeting, the error handling, and the reporting yourself. It is not an auditing platform. Azure Policy Guest Configuration is a declarative system (you “declare” the desired state, and the platform figures out how to check it and report on it), which is the modern, at-scale, and correct solution for governance.

Question 182. You are configuring security for Windows Admin Center (WAC), which is installed in gateway mode. You need to grant your helpdesk team the ability to connect to specific servers through the WAC interface. However, once connected to a server, they must only be able to view the ‘Services’ and ‘Event Logs’ extensions. They must be explicitly prevented from accessing the ‘PowerShell’, ‘Registry’, or ‘Files’ extensions within WAC for those servers. Which feature should you configure to enforce this granular, extension-level permission within the WAC interface?

A) Windows Admin Center Role-Based Access Control (RBAC)
B) Just Enough Administration (JEA)
C) Azure AD Conditional Access
D) Dynamic Access Control (DAC)

Correct Answer: A

Explanation: 

The correct answer is A, Windows Admin Center Role-Based Access Control (RBAC). This is the native, built-in feature of Windows Admin Center (WAC) that allows you to control what a user can do within the WAC interface on a per-connection basis.

Why A (Windows Admin Center RBAC) is Correct: Windows Admin Center, when installed in gateway mode on a Windows Server, provides its own Role-Based Access Control (RBAC) mechanism to secure the gateway itself and the connections it manages. This WAC-level RBAC is distinct from Azure RBAC or Active Directory permissions.

Gateway Roles: WAC provides built-in roles at the gateway level:

Gateway Administrators: Can configure the WAC gateway settings itself.

Gateway Users: Can connect to the WAC gateway but cannot change its settings.

Gateway Readers: Can view the WAC gateway but cannot access any connections.

Per-Connection Roles (The Key Feature): The more granular and relevant feature is the per-connection RBAC. When you share a server connection with a user or group, you can assign them one of two roles for that specific server:

Administrator: Has full access to all WAC extensions for that server.

Reader: Can view most information in WAC for that server but cannot make any changes.

Extension-Level Control: The most granular and powerful feature, which directly addresses the prompt, is the ability to use PowerShell to define custom roles that allow or deny access to specific WAC extensions. You can create a new WAC role (e.g., “Helpdesk-Tier1”) and explicitly define that this role only has read access to the msft.sme.services (Services) and msft.sme.event-viewer (Event Logs) extensions, while denying access to all others, such as msft.sme.powershell, msft.sme.registry, and msft.sme.files.

This native WAC RBAC capability is precisely what is needed to limit a user’s UI-level access to specific tools within the WAC web interface.

Why B (Just Enough Administration – JEA) is Incorrect: Just Enough Administration (JEA) is a PowerShell security technology. It is used to create constrained PowerShell Remoting endpoints. It limits what a user can do when they connect to a server using PowerShell (e.g., Enter-PSSession). While WAC’s own PowerShell extension can be configured to use JEA endpoints, JEA itself does not control which graphical extensions (like ‘Files’ or ‘Registry’) are visible or usable within the WAC web interface. WAC RBAC controls the WAC GUI; JEA controls the PowerShell backend.

Why C (Azure AD Conditional Access) is Incorrect: Azure AD Conditional Access is a feature used to protect the authentication process. You would use it to enforce MFA when a user tries to log in to the WAC gateway (if WAC is configured to use Azure AD). It has no control over what the user can do (authorization) after they have successfully authenticated.

Why D (Dynamic Access Control – DAC) is Incorrect: Dynamic Access Control (DAC) is a feature for data governance on file servers. It is used to classify files (e.g., “Confidential”) and write complex access policies (e.g., “Allow ‘Finance’ department to read ‘Confidential’ files”). It has absolutely no relationship to managing the Windows Admin Center interface.

Question 183. You are deploying Azure File Sync to a new branch office. The office has a 10 TB Azure File Share, but the local Windows Server has only a 1 TB volume for the sync cache. The server is newly built, and this is the first time you are configuring it as a “Server Endpoint.” Your goal is to get the server online and functional for users as quickly as possible. Users must be able to see the entire 10 TB file and folder structure immediately, but you want to avoid downloading any file data to the local server, as it would immediately fill the 1 TB volume. Files should only be downloaded (recalled) when a user actually tries to open them. What “Initial Download” policy should you configure when adding the Server Endpoint? 

A) Namespace only
B) Namespace first, then data
C) Recall on-demand
D) Volume Free Space Policy

Correct Answer: A

Explanation: 

The correct answer is A, “Namespace only”. This is a specific “Initial Download” (or “Initial Sync”) policy designed for exactly this scenario, where you need to populate the server with the file/folder structure (the namespace) without downloading the file contents (the data).

Why A (Namespace only) is Correct: When you add a new “Server Endpoint” to an Azure File Sync “Sync Group,” the server must perform its first-time synchronization. The “Initial Download” policy controls the behavior of this first sync.

The “Namespace”: The “namespace” refers to the file and folder hierarchy. It is the metadata—the directory structure, the file names, the file attributes (like size, permissions, and timestamps). It does not include the actual data within the files.

The “Data”: This is the content, the actual 1s and 0s, that make up the files.

“Namespace only” Policy: By selecting this policy, you are instructing the Azure File Sync agent to perform a one-way, initial download of only the namespace from the Azure File Share. The agent will rapidly build out the full 10 TB directory structure on the local 1 TB volume. All files in this structure will be created as “tiered” files (also known as “reparse points”).

Result: From a user’s perspective (looking in File Explorer), it looks like all 10 TB of data are present. They can navigate the entire folder tree. However, all the file icons will have a “gray X” or “cloud” icon, and their “Size on disk” will be 0 bytes. When a user tries to open one of these files, the Azure File Sync agent’s file system filter will intercept the request and then download (recall) that one specific file on-demand. This perfectly achieves the goal of making the full structure visible immediately without downloading any data, thus fitting the 10 TB namespace onto the 1 TB volume.
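
For illustration, a hedged Az.StorageSync sketch of adding the server endpoint with this policy (the resource group, sync service, sync group, server, and path names are assumptions for this example, not values from the scenario):

# Look up the registered branch server, then create the endpoint with namespace-only initial download.
$server = Get-AzStorageSyncServer -ResourceGroupName 'rg-files' -StorageSyncServiceName 'sss-contoso' |
    Where-Object { $_.FriendlyName -eq 'BRANCH-FS01' }

New-AzStorageSyncServerEndpoint -ResourceGroupName 'rg-files' `
    -StorageSyncServiceName 'sss-contoso' `
    -SyncGroupName 'sg-userdata' `
    -Name 'BRANCH-FS01-D' `
    -ServerResourceId $server.ResourceId `
    -ServerLocalPath 'D:\UserData' `
    -CloudTiering `
    -VolumeFreeSpacePercent 20 `
    -InitialDownloadPolicy NamespaceOnly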

Why B (Namespace first, then data) is Incorrect: This policy is a two-step process. It first downloads the entire namespace (like option A), but then it immediately begins downloading all the data for all files in the background. This would attempt to download 10 TB of data onto the 1 TB volume, which would quickly fail, fill the disk, and not meet the requirement.

Why C (“Recall on-demand”) is Incorrect: “Recall on-demand” is a description of how cloud tiering works, but it is not the name of the “Initial Download” policy. The user’s action of opening a tiered file triggers an on-demand recall. The policy you set to create this initial tiered state is “Namespace only.”

Why D (Volume Free Space Policy) is Incorrect: The “Volume Free Space Policy” (e.g., “keep 20% free”) is a policy for ongoing cloud tiering. It is the reactive policy that purges cold data from the local cache after the server is already running to prevent the disk from filling up. It is not the initial sync policy that you configure when first adding the server endpoint.

Question 184. You are using Storage Replica to protect a critical volume, “Data-Vol,” from a server in your primary data center (“SRV-Primary”) to a server in your disaster recovery site (“SRV-Secondary”). The replication is configured in asynchronous mode. A hurricane is approaching your primary site, and you need to perform a “planned” failover. You have successfully shut down the application on “SRV-Primary.” You need to ensure that all data from the source server’s replication log is fully flushed to the destination server (“SRV-Secondary”) before you make the destination volume writable, to ensure zero data loss. Which PowerShell cmdlet must you run on the source server (“SRV-Primary”) to accomplish this?

A) Sync-SRGroup
B) Set-SRPartnership
C) Test-SRTopology
D) Grant-SRDelegation

Correct Answer: A

Explanation: 

The correct answer is A, Sync-SRGroup. This cmdlet is the designated command for forcing a full synchronization of a Storage Replica group, which is a necessary step in a planned failover to ensure zero data loss.

Why A (Sync-SRGroup) is Correct: When using Storage Replica in asynchronous mode, there is always a lag. The source server (“SRV-Primary”) writes data to its log, and the application gets an immediate acknowledgment. The log data is then sent to the destination (“SRV-Secondary”) on a “best-effort” basis, meaning the destination is always a few seconds or minutes behind.

The “Planned Failover” Problem: If you just “pull the plug” on the primary, you will lose the data that was in the log but had not yet been transmitted.

The Solution: For a planned maintenance event (like the approaching hurricane), you must perform a graceful, planned failover. After stopping the application, the source volume is no longer receiving new writes. However, you must still flush the existing data from its log.

Sync-SRGroup Cmdlet: The Sync-SRGroup cmdlet is designed for this. When you run Sync-SRGroup on the source server, you are forcing the Storage Replica service to synchronously flush all pending data from its log file across the network to the destination server. The command prompt will not return until the destination server has acknowledged receipt of 100% of the pending I/O.

Zero Data Loss: After this command completes, you now have a guarantee that the source and destination disks are 100% identical. At this point, you can safely run Set-SRPartnership to switch the replication direction, which will bring the destination volume (“SRV-Secondary”) online in a writable state with zero data loss.
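
A minimal sketch of that planned-failover sequence follows; the replication group names are assumptions for this scenario:

# 1. On SRV-Primary: flush all pending log I/O to the destination before switching roles.
Sync-SRGroup -ComputerName 'SRV-Primary' -Name 'RG-Data-Primary'

# 2. Reverse the replication direction, making SRV-Secondary the writable source.
Set-SRPartnership -NewSourceComputerName 'SRV-Secondary' -SourceRGName 'RG-Data-Secondary' `
    -DestinationComputerName 'SRV-Primary' -DestinationRGName 'RG-Data-Primary'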

Why B (Set-SRPartnership) is Incorrect: The Set-SRPartnership cmdlet is the command you run after the data is fully synced. This is the command that actually performs the failover by “flipping” the roles, making the source “SRV-Primary” the new destination (read-only) and the destination “SRV-Secondary” the new source (writable). You must run Sync-SRGroup before this command in a planned, asynchronous failover to prevent data loss.

Why C (Test-SRTopology) is Incorrect: Test-SRTopology is a planning and diagnostic tool. You run this before you even configure replication. It analyzes a server and a potential partner to determine if the network, storage, and software are suitable for Storage Replica. It will report on latency, throughput, and log-size requirements. It does not manage an active replication partnership.

Why D (Grant-SRDelegation) is Incorrect: Grant-SRDelegation is a configuration cmdlet used in a specific scenario. It delegates permissions to a non-administrator user or group, allowing them to create, modify, or remove Storage Replica partnerships without being a full local administrator on the storage servers. It has no role in the operational failover process.

Question 185. You are using Azure Site Recovery (ASR) to protect your on-premises Hyper-V virtual machines. A disaster has occurred, and you have successfully performed a “Failover” operation from the Azure portal. Your critical virtual machine, “VM-APP01,” is now running as an Azure VM in your recovery VNet. The failover is complete, and the VM has been “Committed.” Several days later, your on-premises data center is repaired and back online. You now need to replicate the changes back from the Azure VM (“VM-APP01”) to your original on-premises Hyper-V host. What is the first major action you must take in the Azure portal for “VM-APP01”?

A) Select “Failback” for the virtual machine.
B) Select “Re-protect” for the virtual machine.
C) Select “Change replication” for the virtual machine.
D) Select “Disable replication” for the virtual machine.

Correct Answer: B

Explanation: 

The correct answer is B, “Re-protect”. After a failover to Azure is “Committed,” the virtual machine is in a protected-but-unreplicated state. The “Re-protect” operation is the explicit, required first step to establish reverse replication from Azure back to the on-premises environment.

Why B (Re-protect) is Correct: The ASR failover and failback process is a multi-step, cyclical workflow.

Initial State: On-premises VM -> Replicating to -> Azure.

Failover: You initiate a “Failover.” The on-premises VM is shut down (if possible), and the replica Azure VM is created and started. The replication direction is now broken.

Commit: You “Commit” the failover. This action tells ASR that the failover was successful. At this point, the Azure VM is live, but it is not replicating anywhere. The original replication item is “cleaned up.”

The Problem: The Azure VM is now live and its data is changing. The on-premises VM is stale. You need to replicate these changes back to the on-premises site.

“Re-protect” (The Solution): The “Re-protect” button in the ASR vault for the failed-over item is the solution. Clicking this initiates a wizard that establishes a new replication partnership in the reverse direction. It configures the Azure VM as the source and the (now-repaired) on-premises Hyper-V host as the destination. ASR will then calculate the “delta” (what has changed on the Azure VM since the initial failover) and begin replicating only those changes back to the on-premises Hyper-V host.

Once the “Re-protect” operation is complete and the servers are in a “Protected” (but reversed) state, then and only then can you initiate the “Failback” (Option A).

Why A (Failback) is Incorrect: “Failback” is the second step in the process, not the first. The “Failback” option is often grayed out until after the “Re-protect” operation has successfully completed and the Azure VM is fully synchronized with the on-premises site. You cannot fail back to an on-premises server that is not yet in sync. “Re-protect” is the operation that gets it in sync.

Why C (Change replication) is Incorrect: “Change replication” is not a standard, top-level operation in this context. You might change replication policies, but this is not the action you take to initiate the reverse replication.

Why D (Disable replication) is Incorrect: This is the most destructive and incorrect option. “Disable replication” would permanently sever the link between the VM and the ASR vault. It would delete all recovery points and remove the VM from ASR protection entirely. You would lose your ability to fail back and would have to set up replication from scratch, which would involve transferring the entire VM disk (terabytes of data) rather than just the “delta” changes.

Question 186. Your organization is designing a “defense-in-depth” security strategy for its on-premises Windows Server 2022 environment. The security team has two primary goals:

Prevention: Proactively prevent attackers who have gained administrator privileges on a server from using tools like Mimikatz to steal NTLM hashes and Kerberos tickets from the LSASS process in memory.

Detection: Detect, in real-time, anomalous authentication behavior across the domain, such as an attacker successfully using a stolen hash (Pass-the-Hash) to move laterally from a low-value workstation to a high-value domain controller.

Which combination of two Microsoft technologies should you implement to meet both of these goals?

A) Credential Guard and Microsoft Defender for Identity
B) Windows Defender Application Control (WDAC) and AppLocker
C) Just Enough Administration (JEA) and Credential Guard
D) Microsoft Defender for Identity and Azure AD Identity Protection

Correct Answer: A

Explanation: 

The correct answer is A. This combination perfectly aligns the “Prevention” goal with Credential Guard and the “Detection” goal with Microsoft Defender for Identity.

Why A (Credential Guard and Microsoft Defender for Identity) is Correct: This option provides the ideal two-pronged “Prevent and Detect” strategy for credential theft.

Goal 1: Prevention (Credential Guard):

Credential Guard is a preventative hardening feature. It directly addresses the first goal.

It uses virtualization-based security (VBS) to create an isolated “Virtual Secure Mode” (VSM), which is protected by the hypervisor.

It moves the component of the Local Security Authority Subsystem Service (LSASS) that stores credentials (hashes and Kerberos tickets) into this VSM.

An attacker who gains administrator privileges on the server and runs Mimikatz to dump the memory of lsass.exe finds nothing. The secrets are in the VSM, which is inaccessible even to the kernel.

This prevents the initial theft of the credentials from memory.

Goal 2: Detection (Microsoft Defender for Identity):

Microsoft Defender for Identity (MDI) is a detection and analytics platform. It directly addresses the second goal.

It is a cloud-based User and Entity Behavior Analytics (UEBA) solution for your on-premises Active Directory.

You install “MDI sensors” on your domain controllers, which monitor all authentication traffic (NTLM, Kerberos, etc.).

MDI detects known attack patterns. If an attacker does manage to steal a hash (perhaps from a server without Credential Guard), and then uses that hash in a “Pass-the-Hash” attack to access a domain controller, MDI’s sensors will detect this anomalous authentication and generate a high-priority “Pass-the-Hash” alert.

This provides the real-time detection of lateral movement, as requested.

Together, Credential Guard “shields” the server, and MDI “watches the network” for any attacks that get through (or originate elsewhere).
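
As a quick operational check, the following sketch (assuming only the standard Device Guard WMI provider present on Windows Server) verifies whether Credential Guard is actually running on a given host:

# SecurityServicesRunning containing 1 indicates Credential Guard is active; 2 indicates HVCI.
$dg = Get-CimInstance -ClassName Win32_DeviceGuard -Namespace 'root\Microsoft\Windows\DeviceGuard'
if ($dg.SecurityServicesRunning -contains 1) {
    Write-Output 'Credential Guard is running.'
} else {
    Write-Output 'Credential Guard is NOT running.'
}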

Why B (WDAC and AppLocker) is Incorrect: Both of these are application allow-listing technologies. They are used to prevent unauthorized software (like Mimikatz.exe) from running. While this is a good preventative measure, it does not protect LSASS if the attacker uses a file-less or in-memory attack, and it provides no detection for authentication traffic.

Why C (JEA and Credential Guard) is Incorrect: This option provides two preventative measures. Credential Guard (as described above) prevents credential theft. Just Enough Administration (JEA) is a technology that helps prevent attackers from gaining full administrator privileges in the first place, by delegating limited tasks via PowerShell. While both are excellent, this combination lacks the “Detection” component requested in the second goal.

Why D (Defender for Identity and Azure AD Identity Protection) is Incorrect: This option provides two detection platforms, but one is for the wrong environment. Microsoft Defender for Identity (MDI) is correct; it monitors on-premises AD. Azure AD Identity Protection is a similar UEBA tool, but it monitors cloud-native Azure AD for cloud-based risks (like “impossible travel”). It has no visibility into your on-premises Pass-the-Hash attacks. This combination also lacks the “Prevention” component (Credential Guard) requested in the first goal.

Question 187. You are designing a 6-node Storage Spaces Direct (S2D) cluster running Windows Server 2022. The cluster will host a new, large-scale VDI (Virtual Desktop Infrastructure) workload. The VDI solution will store user profile disks (UPDs) on the S2D cluster. This workload is characterized by “bursty” writes (e.g., when hundreds of users log on at 8:00 AM) and a large volume of “cold” data (profiles for users who are not active). Your primary goals are to provide excellent write performance for the “logon storms” while maximizing storage efficiency for the large quantity of cold data. Which S2D resiliency type is the most appropriate for this workload?

A) Three-way mirror
B) Mirror-accelerated parity
C) Nested resiliency
D) Dual parity

Correct Answer: B

Explanation: 

The correct answer is B, Mirror-accelerated parity. This hybrid resiliency type is specifically engineered for “mixed workloads” like VDI, which have both high-performance “hot” data and high-capacity “cold” data, by balancing performance and efficiency.

Why B (Mirror-accelerated parity) is Correct: Mirror-accelerated parity (MAP) is the ideal choice for this VDI scenario because it creates a single volume that internally operates as two tiers.

Mirror Tier (Performance): A portion of the volume (e.g., 20%, which you can configure) is set up as a high-performance three-way mirror. This tier acts as a high-speed “write cache.” When the “logon storm” occurs at 8:00 AM, all the “bursty” write I/O from the User Profile Disks (UPDs) is written directly to this extremely fast mirror tier. The users’ logon experience is fast and responsive because their write operations are completing at mirror-level speeds.

Parity Tier (Efficiency): The remaining, larger portion of the volume (e.g., 80%) is configured as dual parity. This tier is not fast for random writes, but it is far more space-efficient than mirroring, and its efficiency increases as the number of nodes in the cluster grows.

Automatic Data Rotation: Storage Spaces Direct runs a background optimization task. It identifies “cold” data in the fast mirror tier (e.g., profile data for users who have logged off) and automatically moves that data from the mirror tier to the efficient parity tier. This “de-stages” the cold data, freeing up space in the fast mirror tier for the next logon storm or burst of hot writes.

This hybrid approach perfectly matches the workload: it provides the performance of a mirror for the “hot” data (logons) and the storage efficiency of parity for the large volume of “cold” data (inactive profiles).
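
For illustration, a hedged sketch of creating such a volume with New-Volume (the volume name, tier sizes, and the default “Performance”/“Capacity” tier names created by Enable-ClusterStorageSpacesDirect are assumptions; adjust them to your pool and desired split):

# Mirror-accelerated parity: a small mirror tier for hot writes plus a large parity tier for cold data.
New-Volume -FriendlyName 'VDI-Profiles' `
    -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName 'S2D*' `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 2TB, 8TB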

Why A (Three-way mirror) is Incorrect: A pure three-way mirror would provide excellent performance for the logon storm. However, it would have a terrible storage efficiency of 33.3%. The “large volume of cold data” (the inactive profiles) would also be stored with three copies, consuming three times the space. This is an extremely inefficient and expensive way to store cold data.

Why C (Nested resiliency) is Incorrect: Nested resiliency is a specialized, high-availability feature designed exclusively for two-node S2D clusters. It is not a standard or appropriate resiliency type for a 6-node cluster.

Why D (Dual parity) is Incorrect: A pure dual parity volume would provide excellent storage efficiency. However, it would have catastrophic performance for the VDI logon storm. Dual parity (like RAID-6) has a very high “write penalty” for random I/O. The “bursty” writes at 8:00 AM would be extremely slow, and the user experience would be unacceptable, with logon times potentially taking many minutes.

Question 188. You are the administrator for a 12-node Hyper-V failover cluster that runs critical 24/7 workloads. You use Cluster-Aware Updating (CAU) in “self-updating” mode to patch the cluster on the third Sunday of each month. One of the applications running in a VM on this cluster, “App-Finance,” is notoriously sensitive and must be gracefully shut down inside the guest OS before its host node is rebooted for patching. Likewise, it must be verified as running after the host is back online. How can you automate this custom pre- and post-patching action as part of the CAU workflow? 

A) Use the “PreUpdateScript” and “PostUpdateScript” parameters in the CAU Run Profile.
B) Create a Data Collection Rule (DCR) in Azure Monitor to trigger a runbook.
C) Configure the VM’s “Automatic Stop Action” in Hyper-V settings to “Shut down”.
D) Implement Just Enough Administration (JEA) on the Hyper-V hosts.

Correct Answer: A

Explanation: 

The correct answer is A. Cluster-Aware Updating (CAU) has a powerful, built-in extensibility model that allows you to specify custom PowerShell scripts (PreUpdateScript and PostUpdateScript) that run at specific points during the “Updating Run.”

Why A (PreUpdateScript and PostUpdateScript) is Correct: Cluster-Aware Updating (CAU) is designed to be a flexible orchestration engine, not just a simple patch-and-reboot tool. The “CAU Run Profile” is an XML file or a set of parameters that defines how the Updating Run should behave.

PreUpdateScript Parameter: This parameter allows you to specify a path to a PowerShell script. Before CAU takes any action on a cluster node (i.e., before it even puts the node into maintenance mode), it will execute this script. You would write a PreUpdateScript.ps1 that, for example, finds the “App-Finance” VM on that node and uses Hyper-V PowerShell cmdlets (Get-VM and Stop-VM, which performs a graceful shutdown of the guest OS by default) to shut the application down cleanly inside the guest. The CAU run will pause and wait for this script to complete successfully before proceeding.

PostUpdateScript Parameter: This parameter specifies a PowerShell script that runs after the node has been fully patched, rebooted, and has rejoined the cluster (i.e., after it has come out of maintenance mode). You would write a PostUpdateScript.ps1 that would, for example, verify the node is healthy and then start the “App-Finance” VM (if it hasn’t been configured to auto-start) and perhaps even run a test query to verify the application is responsive.

These two “hooks” allow you to integrate complex, custom, application-aware actions directly into the automated CAU patching workflow, which is exactly what the scenario requires.
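
A hedged sketch of wiring these hooks into a self-updating CAU role (the cluster name and script paths are illustrative; the scripts must be reachable from every node):

# Register the CAU self-updating role to run on the third Sunday, with custom pre/post scripts.
Add-CauClusterRole -ClusterName 'HV-CLUSTER01' `
    -DaysOfWeek Sunday -WeeksOfMonth 3 `
    -PreUpdateScript  '\\fileserver\CAU\PreUpdateScript.ps1' `
    -PostUpdateScript '\\fileserver\CAU\PostUpdateScript.ps1' `
    -MaxFailedNodes 1 -MaxRetriesPerNode 2 `
    -EnableFirewallRules -Force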

Why B (DCR in Azure Monitor) is Incorrect: A Data Collection Rule (DCR) is a feature of Azure Monitor used to define what logs and metrics to collect from a machine. It has no relationship to the on-premises CAU patching process and cannot execute pre/post-patching scripts.

Why C (Configure the VM’s “Automatic Stop Action”) is Incorrect: The VM’s “Automatic Stop Action” (e.g., “Shut down,” “Save state,” “Turn off”) in Hyper-V settings defines what Hyper-V should do to that VM when the host server itself is shut down. CAU will trigger this when it reboots the host. However, this only handles the “stop” part of the process. It does not provide a “hook” for the post-update action (verifying the app is running). Using the CAU scripts is the more complete and robust solution, as it gives you control over the entire workflow, not just the shutdown.

Why D (Implement JEA) is Incorrect: Just Enough Administration (JEA) is a security feature for delegating administrative tasks via constrained PowerShell endpoints. It is used to give a low-privilege user the ability to perform a high-privilege task. It has no bearing on the automation of scripts within the CAU process.

Question 189. Your organization has a mixed file server environment. You have a legacy file server running Samba on a Linux server (e.g., “Samba-FS”) that hosts 5 TB of user data. You are planning to migrate this data to a new Windows Server 2022 server (“FS-NEW”). You want to use the Storage Migration Service (SMS) in Windows Admin Center because it can orchestrate the entire process, including the cutover. What is a primary prerequisite for using SMS to migrate from a Samba server?

A) You must install the Storage Migration Service agent on the Linux server.
B) You must provide root-level SSH credentials to the SMS Orchestrator.
C) You must install the Storage Migration Service proxy service on “FS-NEW”.
D) You must first migrate the data to an intermediate Windows Server 2008 R2 server.

Correct Answer: B

Explanation: 

The correct answer is B. The Storage Migration Service (SMS) has native, built-in support for migrating from Linux-based Samba file servers. To “inventory” and “transfer” data from the Samba server, the SMS orchestrator must be able to connect to it, which it does using the standard SSH protocol.

Why B (You must provide root-level SSH credentials) is Correct: The Storage Migration Service (SMS) is a surprisingly versatile tool. While its primary use is for Windows-to-Windows migrations, it also has first-class support for migrating from NetApp filers and Linux Samba servers.

Inventory Phase: When you add a new “Source” in the SMS wizard and specify it is a Samba server, the tool will not ask for Windows credentials. Instead, it will prompt you for an SSH (Secure Shell) username and password (or a private key).

How it Works: The SMS Orchestrator (the Windows Server running the SMS service) initiates an SSH connection to the “Samba-FS” Linux server. It uses these credentials to log in.

Data Gathering: Once authenticated via SSH, the orchestrator runs a series of commands on the Linux server (e.g., smbclient, grep, awk) to discover the list of Samba shares, their paths on the local file system, and their configurations.

Transfer Phase: During the “Transfer” phase, SMS uses these same SSH credentials to perform the data copy. It uses a secure, over-SSH transfer method to pull the data from the Linux server to the SMS Orchestrator, which then pushes it to the destination “FS-NEW” server. (Note: For the cutover phase, SMS cannot take over the identity of a Linux server. The cutover feature is for Windows-to-Windows only. But the inventory and transfer are fully supported).

Therefore, providing SSH credentials (typically root-level or an account with equivalent read access to the entire file system and Samba config) is the fundamental prerequisite for allowing SMS to talk to the Linux source server.

Why A (Install SMS agent on Linux) is Incorrect: This is a key differentiator. The Storage Migration Service is agentless for the source servers. You do not install any software on the source Windows or Linux servers. The SMS Orchestrator does all the work remotely, using standard protocols (RPC/SMB for Windows, SSH for Linux).

Why C (Install SMS proxy service) is Incorrect: There is no “Storage Migration Service proxy service” component. The primary components are the “Orchestrator” (which runs on a Windows Server, often the destination server) and the “Storage Migration Service” itself (a Windows feature). There is no “proxy” for Samba.

Why D (First migrate to an intermediate server) is Incorrect: This would be a massive, unnecessary, and time-consuming “double-hop” migration. SMS was explicitly designed to avoid this by adding direct support for Samba sources. You can migrate directly from “Samba-FS” to “FS-NEW”.

Question 190. You are designing a security solution using Just Enough Administration (JEA) to allow a junior administrator group to manage services on a production server. You need to create a JEA configuration that allows this group to use only the Restart-Service and Get-Service cmdlets. Furthermore, you must restrict them to only being able to restart the “Spooler” and “WinRM” services; they must be blocked from restarting any other service, such as “Kerberos.” In which JEA configuration file would you define these highly granular command and parameter restrictions? 

A) In the Session Configuration File (.pssc)
B) In the Role Capability File (.psrc)
C) In the PowerShell Transcript Log File (.txt)
D) In the Group Policy Object (GPO) Administrative Template

Correct Answer: B

Explanation: 

The correct answer is B, in the Role Capability File (.psrc). This is the component of a Just Enough Administration (JEA) endpoint that defines the permissions—the “what” (which commands, functions, and parameters a user is allowed to run).

Why B (In the Role Capability File – .psrc) is Correct: A JEA endpoint configuration is composed of two main types of files:

Session Configuration File (.pssc): This file defines the session’s properties. It answers “who” and “how.” It defines who can connect to the endpoint (which user groups), how they connect (e.g., as a virtual administrator account), and which “Role Capabilities” are available in this session.

Role Capability File (.psrc): This file defines the permissions of a “role.” It answers “what.” This is where you create the highly granular allow-list of commands.

VisibleCmdlets: In this file, you would create an entry for VisibleCmdlets. You would list Get-Service and Restart-Service. This makes only these two cmdlets visible in the user’s session.

Parameter Constraints (The Key Feature): The .psrc file allows you to define a ValidateSet or ValidatePattern for the parameters of a visible cmdlet. For the Restart-Service cmdlet, you would add a section for its -Name parameter and define a ValidateSet containing only “Spooler” and “WinRM.”

When the junior admin connects, they will only be able to see and run Get-Service and Restart-Service. If they run Restart-Service -Name “Spooler”, it will succeed. If they attempt to run Restart-Service -Name “Kerberos”, the JEA endpoint will block the command before it even executes, because “Kerberos” is not in the ValidateSet defined in the Role Capability File. This provides the exact, granular, parameter-level control the scenario demands.
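
A minimal sketch of generating a role capability file with exactly these restrictions (the file name is illustrative; New-PSRoleCapabilityFile writes the .psrc shell for you):

# Allow Get-Service, and allow Restart-Service only for the Spooler and WinRM services.
New-PSRoleCapabilityFile -Path '.\ServiceOperators.psrc' -VisibleCmdlets @(
    'Get-Service',
    @{ Name = 'Restart-Service'
       Parameters = @{ Name = 'Name'; ValidateSet = 'Spooler', 'WinRM' } }
)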

Why A (In the Session Configuration File – .pssc) is Incorrect: The Session Configuration File (.pssc) is the “glue” that holds the endpoint together. It defines which .psrc files to load for the session, but it does not contain the definitions of the allowed commands and parameters itself. Those definitions reside in the .psrc file(s).

Why C (In the PowerShell Transcript Log File – .txt) is Incorrect: A PowerShell transcript file is a log file. It is the output of a PowerShell session, recording all commands and their results for auditing purposes. It is a “read-only” artifact of a session that has already occurred; it is not a configuration file that defines permissions.

Why D (In the GPO Administrative Template) is Incorrect: While Group Policy can be used to deploy the JEA endpoint configuration (the .pssc and .psrc files) to servers, the Group Policy Object itself is not where you author the granular command definitions. You author the .psrc file in a text editor or PowerShell ISE/VS Code, and then you use GPO (or DSC) as the deployment mechanism.

Question 191. You are implementing a strict “default-deny” application control policy on a fleet of Windows Server 2022 Hyper-V hosts using Windows Defender Application Control (WDAC). You have successfully created a “gold image” policy by scanning a perfectly configured reference server. You are now concerned that this policy may be too restrictive and might block legitimate, but rare, administrative scripts or third-party drivers. You want to deploy the policy in a way that logs all potential violations (i.e., what would have been blocked) without actually blocking anything. This will allow you to collect these logs, analyze them, and add the necessary exceptions to your policy before “going live.” Which mode must you deploy the WDAC policy in? 

A) Audit Mode
B) Enforced Mode
C) Hypervisor-Protected (HVCI) Mode
D) JEA-Compliant Mode

Correct Answer: A

Explanation: 

The correct answer is A, “Audit Mode”. This is a specific, built-in operational mode for Windows Defender Application Control (WDAC) designed for exactly this purpose: to test a policy’s impact without enforcing it.

Why A (Audit Mode) is Correct: Windows Defender Application Control (WDAC) policies can be deployed in one of two distinct operational modes.

Enforced Mode: This is the “live” or “production” mode. When a policy is in “Enforced Mode,” the Code Integrity engine will actively block any application, script, or driver that tries to run but does not match the “allow” rules in the policy. This is the ultimate “default-deny” goal, but it is dangerous to deploy without testing.

Audit Mode (The Solution): This is the “test” or “logging-only” mode. When you deploy a WDAC policy in “Audit Mode,” the Code Integrity engine evaluates all code that attempts to run, just as it would in enforced mode. However, it does not block anything.

If a piece of code matches the policy (is allowed), nothing is logged.

If a piece of code does not match the policy (i.e., it would have been blocked), the Code Integrity engine allows it to run but writes a detailed “Warning” event to the Windows Event Log (specifically, in Applications and Services Logs > Microsoft > Windows > CodeIntegrity > Operational).

This allows you, as the administrator, to deploy the new policy, let the servers run for a week or a month, and then centrally collect and analyze these “audit-only” event logs. You can see exactly what legitimate software (e.g., “VMwareTools.exe” or “MyCustomScript.ps1”) would have been blocked. You can then use this data to refine your WDAC policy and add the necessary “allow” rules before you flip the switch to “Enforced Mode.”

This “audit-first” approach is the mandatory best practice for any WDAC deployment and perfectly matches the scenario.
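
For illustration, a hedged sketch of stamping an existing policy XML with audit mode and converting it for deployment (file paths are assumptions for this example):

# Rule option 3 is "Enabled:Audit Mode"; removing it later with -Delete switches the same policy to enforced.
Set-RuleOption -FilePath 'C:\WDAC\GoldImagePolicy.xml' -Option 3

# Convert the XML policy into the binary form the Code Integrity engine consumes.
ConvertFrom-CIPolicy -XmlFilePath 'C:\WDAC\GoldImagePolicy.xml' `
    -BinaryFilePath 'C:\WDAC\GoldImagePolicy.cip'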

Why B (Enforced Mode) is Incorrect: Deploying in “Enforced Mode” is the final step, not the testing step. If you deployed the new policy in this mode, it would immediately start blocking the “legitimate, but rare” scripts, causing production outages.

Why C (Hypervisor-Protected – HVCI – Mode) is Incorrect: Hypervisor-Protected Code Integrity (HVCI) is not an operational mode (like Audit/Enforced). HVCI is a security feature that protects the WDAC policy (in either mode) from tampering. It uses virtualization-based security (VBS) to move the Code Integrity engine into an isolated VSM. You should use HVCI, but it is separate from the Audit vs. Enforced decision. You can have “HVCI-protected Audit Mode” or “HVCI-protected Enforced Mode.”

Why D (JEA-Compliant Mode) is Incorrect: “JEA-Compliant Mode” is not a real term. Just Enough Administration (JEA) is a separate security technology for PowerShell. While a WDAC policy can be used to lock down a server to allow JEA to function, there is no “JEA-Compliant Mode” for WDAC itself.

Question 192. You are configuring Azure Automation to manage your on-premises Windows Servers, which are connected via Azure Arc. You have deployed a Hybrid Runbook Worker on an on-premises server. You need to run a PowerShell runbook that performs privileged operations on other on-premises servers. This script requires elevated, local administrator credentials on the target servers. The runbook must not use the local “NT AUTHORITY\System” account of the Hybrid Runbook Worker server itself, as this account has no rights on remote machines. How should you configure the Hybrid Runbook Worker and the runbook to execute with the necessary privileged credentials?

A) Configure the Hybrid Runbook Worker to run as a “User” (not “System”) and specify a service account that has the required permissions.
B) Store the privileged credentials in an “Azure AD Privileged Identity Management (PIM)” role.
C) Create a “Connection” asset in Azure Automation and use Get-AutomationConnection in the runbook script.
D) Configure the Hybrid Runbook Worker in “System” mode and use Grant-SRDelegation to give it rights.

Correct Answer: A

Explanation: 

The correct answer is A. A Hybrid Runbook Worker (HRW) can be configured to run in one of two security contexts: “System” (the default) or “User.” For runbooks that need to access remote network resources, the “User” context is the correct and intended configuration.

Why A (Configure HRW as “User” and specify a service account) is Correct: A Hybrid Runbook Worker (HRW) installation has two modes, which define the security context its runbooks will execute under:

System Mode (Default): The runbook executes under the local NT AUTHORITY\System account on the HRW server. This account is powerful locally but has no identity on the network. It cannot authenticate to other servers (e.g., \\Server-02\C$) or services (e.g., a SQL database). This mode is suitable only for runbooks that manage the HRW server itself.

User Mode (The Solution): This mode is specifically designed for runbooks that need to access network resources. When you configure “User” mode, you must provide a specific Active Directory user account (typically a “gMSA” – Group Managed Service Account, or a standard service account with a securely managed password). The Hybrid Runbook Worker service will then run as this specified user account.

The Benefit: When the runbook executes (e.g., Invoke-Command -ComputerName Server-02), it authenticates to “Server-02” as the service account (e.g., CONTOSO\svc-Automation).

The Setup: To meet the scenario’s requirement, you would:

Create an AD service account (e.g., CONTOSO\svc-Automation).

Grant this account the necessary “local administrator” rights on all the target servers.

Configure the Hybrid Runbook Worker in “User” mode, providing the credentials for CONTOSO\svc-Automation.

This allows the runbook to successfully authenticate and perform its privileged operations on the remote servers.

Why B (Store credentials in PIM) is Incorrect: Azure AD Privileged Identity Management (PIM) is for managing Azure AD privileged roles (like “Global Administrator”). It is not a credential store for on-premises AD service accounts that a runbook can query.

Why C (Create a “Connection” asset) is Incorrect: This is a plausible but less correct approach. You could leave the HRW running as “System” and then, inside the runbook script, use Get-AutomationConnection or Get-AutomationPSCredential to retrieve stored credentials (e.g., for CONTOSO\svc-Automation). You would then have to manually pass these credentials to every command (e.g., Invoke-Command -Credential $cred). This is more complex, more prone to error, and less seamless than running the entire runbook process in the correct security context from the start, which is what “User” mode (Option A) is designed for.

Why D (Configure in “System” mode and use Grant-SRDelegation) is Incorrect: This is a nonsensical combination. “System” mode, as explained, has no network identity. Grant-SRDelegation is a cmdlet related to Storage Replica, giving users permission to manage replication. It has absolutely no relationship to Azure Automation or Hybrid Runbook Workers.

Question 193. You are the administrator for a hybrid environment. You have 100 on-premises Windows Server 2022 machines connected via Azure Arc, and 50 Azure VMs. You need to collect a specific set of data from all 150 machines and send it to a central Log Analytics workspace:

 

• From the “System” Event Log, only “Error” and “Critical” events.
• From the “Application” Event Log, only “Warning” events.
• The % Processor Time performance counter.
• The LogicalDisk\% Free Space performance counter.

You want to use the most modern and granular Azure agent to define these specific data-gathering rules and apply them to all 150 machines. Which Azure technology should you use to define and apply these rules?

 

A) The Log Analytics agent (MMA) configured in the workspace’s “Agents configuration”.
B) A Data Collection Rule (DCR) used by the Azure Monitor Agent (AMA).
C) An Azure Automation runbook that runs Get-EventLog and Get-Counter.
D) An Azure Policy Guest Configuration definition.

Correct Answer: B

Explanation: 

The correct answer is B, a Data Collection Rule (DCR) used by the Azure Monitor Agent (AMA). This is the modern, flexible, and granular solution for defining what telemetry to collect from which resources and where to send it.

Why B (A Data Collection Rule – DCR) is Correct: This scenario highlights the exact problem the AMA and DCRs were created to solve.

The Old Way (MMA): The legacy Log Analytics Agent (MMA, option A) used a monolithic, “all-or-nothing” configuration set in the workspace. You would connect an agent to a workspace, and in the “Agents configuration” tab, you would say “Collect all ‘System’ Error/Critical events” and “Collect the ‘% Processor Time’ counter.” This one configuration applied to every single agent connected to that workspace. It was not granular.

The New Way (AMA + DCR): The Azure Monitor Agent (AMA) is the new, modern, consolidated agent. It is “dumb” by default; it does not know what to collect. It must be told what to collect by associating it with a Data Collection Rule (DCR).

The DCR (The Solution): A DCR is a separate, independent Azure resource. In this DCR, you would define exactly what you want:

Data Source 1: Windows Event Logs

XPath Query: System!*[System[(Level=1 or Level=2)]] (System log, Critical and Error events)

XPath Query: Application!*[System[(Level=3)]] (Application log, Warning events)

Data Source 2: Performance Counters

Counter: \Processor(_Total)\% Processor Time

Counter: \LogicalDisk(*)\% Free Space

Destination: Your Log Analytics workspace.

Association: You would then associate this single DCR with all 150 of your machines (the 50 Azure VMs and the 100 Arc-enabled servers). The AMA on each machine will download this DCR and immediately begin collecting only the specific data you defined. This is far more efficient and flexible than the old MMA.

Why A (The Log Analytics agent – MMA) is Incorrect: The MMA is the legacy agent. It is on a deprecation path and will be retired. It does not use DCRs and lacks the granular, flexible targeting that DCRs provide. It is not the “most modern” solution.

Why C (An Azure Automation runbook) is Incorrect: You could write a runbook to manually scrape this data and send it to Azure Monitor, but this would be an incredibly complex, brittle, and inefficient custom solution. This is not what Azure Automation is for. You would be re-inventing the wheel that the AMA/DCR already provides.

Why D (An Azure Policy Guest Configuration) is Incorrect: Guest Configuration is an audit and compliance engine. It is used to check the state of a machine (e.g., “Is the WinRM service disabled?”) and report “Compliant” or “Non-Compliant.” It is not a telemetry pipeline for streaming time-series data like performance counters or event logs.

Question 194. You are designing a high-availability solution for a new 8-node Windows Server 2022 failover cluster. All 8 nodes are located within a single on-premises data center. You need to configure a quorum witness to act as a “tie-breaker” in the event of a 4-node vs. 4-node split. The company has a “cloud-first” policy and wants to use an Azure-based solution for the witness to ensure it is on a separate fault domain from the on-premises data center. However, a recent security audit has blocked all outbound internet access from the cluster nodes, except for specific, required Azure services. Which Azure service must you specifically allow outbound connectivity to from the cluster nodes to enable a Cloud Witness? 

A) Azure Active Directory (Azure AD)
B) Azure Site Recovery (ASR)
C) Azure Blob Storage
D) Azure Automation

Correct Answer: C

Explanation: 

The correct answer is C, Azure Blob Storage. The “Cloud Witness” feature for a failover cluster works by creating and maintaining a single, small “blob” file in a standard Azure Storage Account, and therefore, it requires network connectivity to that storage account.

Why C (Azure Blob Storage) is Correct: A “Cloud Witness” is a quorum witness type that leverages an Azure Storage Account.

How it Works: When you configure a Cloud Witness, the cluster does not connect to a “Cloud Witness Service.” Instead, it uses a standard Azure Storage Account (which you must create first).

The “Blob”: The cluster uses the Storage Account Access Key to authenticate. It then creates a new container in that storage account, and within that container, it creates and maintains a single, 0-byte blob file (e.g., ClusterName.blob).

Quorum “Vote”: This blob file acts as the witness. The cluster nodes use the standard Azure Blob Storage REST API (which runs over HTTPS on port 443) to communicate with the storage account. They will attempt to place a “lease” or “lock” on this blob file. The node (or group of nodes) that successfully locks the blob “owns” the witness and gets its “vote.”

The Network Requirement: Because this entire mechanism relies on the cluster nodes being able to make outbound HTTPS calls to the *.blob.core.windows.net endpoint for your storage account, you must allow this specific traffic through your firewall. If you block this, the nodes cannot “see” or “lock” the blob, the witness will fail, and the quorum will be unstable.

Therefore, the specific Azure service that must be accessible is Azure Blob Storage.
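
For completeness, configuring the witness itself is a single cmdlet once the storage account exists (the account name and key below are placeholders); the nodes then reach that account’s blob endpoint over TCP 443:

# Run on any cluster node (or remotely with -Cluster); the access key authenticates the witness blob.
Set-ClusterQuorum -CloudWitness -AccountName 'contosowitnesssa' -AccessKey '<storage-account-access-key>'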

Why A (Azure Active Directory) is Incorrect: While the administrator setting up the cluster might use Azure AD to log in to the Azure portal to create the storage account, the cluster nodes themselves (the server-to-service communication) do not authenticate to Azure AD to use the witness. They use the Storage Account Access Key, which is a static secret, to authenticate directly to the Blob Storage endpoint.

Why B (Azure Site Recovery) is Incorrect: Azure Site Recovery (ASR) is a completely separate disaster recovery service. It is used to replicate entire virtual machines. It has no relationship to the failover cluster’s quorum or witness mechanism.

Why D (Azure Automation) is Incorrect: Azure Automation is a service for process automation (runbooks) and configuration management (DSC). It is not involved in the cluster quorum.

Question 195. You are using Hyper-V Replica to protect a critical virtual machine, “VM-APP01,” from your primary data center (“Primary-Site”) to a disaster recovery data center (“DR-Site”). The replication frequency is set to 5 minutes. The business continuity plan has a new requirement: in addition to the 5-minute RPO at the DR-Site, they also want to maintain a second copy of the VM at a third data center (“Archive-Site”). This third copy is for long-term archival and can be up to an hour behind. How can you configure Hyper-V Replica to achieve this three-site “daisy-chain” replication?

A) This is not possible; Hyper-V Replica only supports two sites.
B) Configure “VM-APP01” on “Primary-Site” to replicate to both “DR-Site” and “Archive-Site” simultaneously.
C) Enable “Extended Replication” on the “VM-APP01” replica at the “DR-Site” and point it to the “Archive-Site”.
D) Configure a Storage Replica partnership between the “DR-Site” and “Archive-Site” storage.

Correct Answer: C

Explanation: 

The correct answer is C. Hyper-V Replica has a built-in feature called “Extended Replication,” which is designed for this exact “daisy-chain” or “A-B-C” topology.

Why C (Enable “Extended Replication”) is Correct: A standard Hyper-V Replica partnership (called “Primary Replication”) exists between two servers: the Primary Server (“Primary-Site”) and the Replica Server (“DR-Site”).

The “A-B” Link: In this scenario, “Primary-Site” is replicating “VM-APP01” to “DR-Site” every 5 minutes. “DR-Site” holds the replica VM.

The “B-C” Link (Extended Replication): The “Extended Replication” feature allows you to take the Replica VM (the one sitting on “DR-Site”) and configure it as a “Primary” server for a second, new replication partnership.

How it Works: You would go to the Hyper-V Manager on the “DR-Site” server. You would right-click the replica VM (“VM-APP01”) and select “Replication > Enable Extended Replication…”. A wizard will start, asking you for the name of the “Extended Replica Server,” which would be your server at “Archive-Site.”

Resulting Flow: This creates the following “daisy-chain”:

“Primary-Site” -> (5-min RPO) -> “DR-Site”

“DR-Site” -> (e.g., 15-min RPO) -> “Archive-Site”

Benefit: This is highly efficient. “Primary-Site” only takes the performance hit of one replication. “DR-Site” receives the data, and then, on its own schedule (which you can set to 60 minutes), it forwards that data to the “Archive-Site.” This perfectly meets the three-site requirement.
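For reference, a minimal PowerShell sketch of the same configuration (run on the “DR-Site” Hyper-V host; the archive host name is a placeholder, and the 900-second interval shown is a supported extended-replication frequency that keeps the archive copy well within the “up to an hour behind” requirement):

# Extend replication of the replica VM from DR-Site onward to Archive-Site.
Enable-VMReplication -VMName "VM-APP01" -ReplicaServerName "hv-archive01.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 900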

Why A (This is not possible) is Incorrect: This is factually incorrect. This exact scenario is the reason the “Extended Replication” feature was created.

Why B (Replicate to both sites simultaneously) is Incorrect: This is not supported. A “Primary” Hyper-V VM can only have one replication partnership. It cannot “fan out” its replication data to two different replica servers at the same time. The only way to get the data to a third site is by “daisy-chaining” (extending) it from the second site.

Why D (Configure Storage Replica) is Incorrect: Storage Replica is a block-level, volume-based replication technology. It has no awareness of Hyper-V or virtual machines. You would be replicating the entire LUN that the “DR-Site” VM’s VHDX files sit on. This is a very “heavy-handed” and complex solution that is not integrated with Hyper-V. Hyper-V Replica’s “Extended Replication” is the native, VM-aware, and correct tool for the job.

Question 196. You are the Active Directory administrator for a hybrid organization. You are using Azure AD Connect with Password Hash Synchronization. You have been tasked with strengthening your on-premises password security. You want to implement Azure AD Password Protection, which will block users from setting passwords that are on Microsoft’s global banned password list or your own custom banned list (e.g., “Contoso”). You have already enabled the feature in the Azure AD portal. You now need to enforce this policy for password changes that happen on-premises (e.g., when a user presses Ctrl+Alt+Del). What software components must you install, and where? 

A) The Azure AD Password Protection “DC Agent” and “Proxy Service,” both on all Domain Controllers.
B) The Azure AD Password Protection “DC Agent” on all Domain Controllers, and the “Proxy Service” on a member server.
C) The Azure AD Application Proxy Connector on a member server.
D) The Azure Arc agent on all Domain Controllers.

Correct Answer: B

Explanation: 

The correct answer is B. The on-premises deployment of Azure AD Password Protection requires two distinct components: the “DC Agent” (a password filter DLL), which must be on every DC, and the “Proxy Service,” which acts as a download gateway and should be on a member server.

Why B (DC Agent on all DCs, Proxy Service on member server) is Correct: Azure AD Password Protection works in the cloud by default, but to extend its protection to your on-premises Active Directory, you must deploy two on-premises components.

Azure AD Password Protection DC Agent: This is the enforcement component. It is a small piece of software (a password filter DLL) that you must install on every single domain controller (both read-write and read-only) in your forest.

How it Works: When a user tries to change their password, the “Set Password” request hits a DC. This agent intercepts the request before it is committed to the AD database. It validates the new password against a local copy of the banned password policies (both global and custom). If the password is “banned,” the agent rejects the password change, and the user receives an error.

Azure AD Password Protection Proxy Service: This is the communication component. The DC Agents themselves do not talk directly to the internet. To maintain a strict security boundary, you install the “Proxy Service” on a member server (or multiple member servers for HA).

How it Works: This one Proxy Service is the only component that needs to communicate outbound to Azure (over port 443). It periodically contacts the Azure AD Password Protection service to download the latest global and custom banned password policies.

Local Distribution: The “DC Agents” (on the domain controllers) then communicate internally (via RPC) with the on-premises “Proxy Service” to fetch the latest policies.

This “DC Agent on all DCs” and “Proxy on a member server” architecture is the correct and secure deployment model.
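After installing the Proxy Service on the member server, the proxy and the forest are registered with Azure AD using cmdlets from the AzureADPasswordProtection PowerShell module that ships with the proxy. A minimal sketch (the UPN is a placeholder for a Global Administrator account):

Import-Module AzureADPasswordProtection
# Register this proxy instance with the Azure AD Password Protection service.
Register-AzureADPasswordProtectionProxy -AccountUpn "admin@contoso.onmicrosoft.com"
# Register the on-premises forest (run once per forest).
Register-AzureADPasswordProtectionForest -AccountUpn "admin@contoso.onmicrosoft.com"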

Why A (DC Agent and Proxy Service, both on all DCs) is Incorrect: This is an insecure and incorrect topology. While it might technically work, it is strongly discouraged by Microsoft. Domain Controllers are high-value security assets and should have the smallest possible attack surface. You should never install gateway/proxy services that communicate directly with the internet on a domain controller. The Proxy Service is explicitly designed to be on a separate, lower-privilege member server.

Why C (The Azure AD Application Proxy Connector) is Incorrect: The Azure AD Application Proxy Connector is a different service. It is used to publish an internal web application (like an on-premises SharePoint site) to the external internet, using Azure AD for pre-authentication. It has no relationship to filtering on-premises password changes.

Why D (The Azure Arc agent) is Incorrect: The Azure Arc agent is used to onboard servers into Azure Resource Manager for management (e.g., with Azure Policy, Monitor, etc.). It is not related to the Azure AD Password Protection feature.

Question 197. You are the administrator for a critical file server running Windows Server 2022. You have enabled the System Insights feature to provide predictive analytics on resource consumption. The built-in capabilities (forecasting CPU, Network, and Storage) are useful, but your primary application writes heavily to a specific custom performance counter named \ContosoApp\Transactions\Sec. You want System Insights to ingest data from this custom counter, analyze it, and generate its own forecast for when “Transactions/Sec” is predicted to exceed a critical threshold. What must you do to enable this? 

A) You cannot do this; System Insights only supports its four built-in capabilities.
B) Create a new Data Collection Rule (DCR) in Azure Monitor to add the custom counter.
C) Use the Add-StorageHealthSetting PowerShell cmdlet to register the new counter.
D) Write a new System Insights “Capability” using PowerShell and a custom ML model.

Correct Answer: D

Explanation: 

The correct answer is D. The System Insights feature is designed to be extensible. While it ships with a set of built-in capabilities, it provides a full PowerShell framework (New-InsightsCapability, Add-InsightsCapabilityDataSource, etc.) that allows administrators to create and register their own custom predictive capabilities.

Why D (Write a new System Insights “Capability”) is Correct: System Insights is not a “black box” feature. Microsoft has designed it as an extensible platform.

The “Capability” Framework: A “capability” in System Insights is the combination of a data source, a machine learning model, and an output.

The Process: To add your \ContosoApp\Transactions\Sec counter, you would follow a documented, multi-step process in PowerShell:

Add-InsightsCapabilityDataSource: You would first register your custom performance counter as a new “data source” that System Insights should start collecting.

New-InsightsCapability: You would then define a new capability (e.g., “Contoso App Forecasting”).

Specify a Model: You would associate this new capability with one of the built-in machine learning models that System Insights provides (e.g., “TimeSeries-LinearRegression” or “TimeSeries-ARIMA”). You don’t have to be a data scientist; you can re-use the models that the built-in capabilities use.

Set Thresholds: You would define the output and the thresholds (e.g., “generate a ‘Warning’ when the prediction exceeds 1000 Trans/Sec” and “generate a ‘Critical’ when it exceeds 5000 Trans/Sec”).

Result: Once this new custom capability is registered, the System Insights engine will automatically start collecting the \ContosoApp\Transactions\Sec counter data, feed it into the specified ML model, and generate its own predictions (e.g., “Capability ‘Contoso App Forecasting’ predicts a ‘Critical’ status in 14 days”). This prediction will then appear in the System Insights dashboard in Windows Admin Center and generate Event Log entries just like the built-in capabilities.
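A minimal PowerShell sketch of registering and running such a capability is shown below; the capability name and DLL path are placeholders for a capability library you would build against the System Insights authoring SDK, and the exact cmdlet set should be confirmed with Get-Command -Module SystemInsights on your server:

# Register the custom capability from its compiled capability library.
Add-InsightsCapability -Name "Contoso App Forecasting" -Library "C:\ContosoInsights\ContosoForecast.dll"
# Turn the capability on, run an on-demand prediction, and read the latest result.
Enable-InsightsCapability -Name "Contoso App Forecasting"
Invoke-InsightsCapability -Name "Contoso App Forecasting"
Get-InsightsCapabilityResult -Name "Contoso App Forecasting"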

Why A (Cannot do this) is Incorrect: This is factually incorrect. The extensibility of System Insights is a key design feature, allowing it to be adapted to any custom application or workload.

Why B (Create a DCR in Azure Monitor) is Incorrect: A Data Collection Rule (DCR) is for Azure Monitor and the Azure Monitor Agent (AMA). It defines what data to send to the cloud (a Log Analytics workspace). System Insights is a local, on-premises feature that runs its own ML models on the server. The two are completely separate monitoring platforms.

Why C (Use Add-StorageHealthSetting) is Incorrect: Add-StorageHealthSetting is a PowerShell cmdlet related to the Storage Health Service (part of Storage Spaces Direct). It is used to override or define health settings for physical disks, enclosures, and other storage components. It has no relationship to the System Insights predictive analytics feature.

Question 198. You are designing a high-availability and disaster-recovery solution using a single Windows Server 2022 failover cluster. The cluster will have 4 nodes in your primary data center (“SiteA”) and 4 nodes in your disaster recovery data center (“SiteB”). You will be using Storage Spaces Direct (S2D) for the storage. You need to configure the cluster so that data is written and fully acknowledged in both sites before an application receives the “write complete” confirmation. This will ensure zero data loss (RPO=0) in the event of a total failure of “SiteA”. What is this S2D cluster configuration called, and what replication mode does it use?

A) A “Multi-Cluster Set” using Asynchronous Replication.
B) A “Stretch Cluster” using Synchronous Replication.
C) A “Guest Cluster” using Hyper-V Replica.
D) A “Storage Replica Partnership” using Asynchronous Replication.

Correct Answer: B

Explanation: 

The correct answer is B, a “Stretch Cluster” using “Synchronous Replication”. This is the official Microsoft terminology for a single Storage Spaces Direct cluster whose nodes are geographically distributed across two sites and configured for site-level synchronous replication.

Why B (A “Stretch Cluster” using Synchronous Replication) is Correct: This scenario describes the “disaster recovery” use case for Storage Spaces Direct, which is a key feature of the AZ-801 exam.

Single Cluster: A “Stretch Cluster” is one single failover cluster. The 8 nodes (4 in SiteA, 4 in SiteB) are all members of the same cluster.

Site Awareness: You configure the cluster with “Fault Domains” (or “Sites”). You tell the cluster, “These 4 nodes are ‘SiteA'” and “These 4 nodes are ‘SiteB’.” This makes the cluster “site-aware.”
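A minimal PowerShell sketch of this site-awareness configuration (run on any cluster node; the node and site names are placeholders):

# Define the two sites as fault domains.
New-ClusterFaultDomain -Name "SiteA" -Type Site
New-ClusterFaultDomain -Name "SiteB" -Type Site
# Assign nodes to their sites (repeat for the remaining nodes in each site).
Set-ClusterFaultDomain -Name "Node1" -Parent "SiteA"
Set-ClusterFaultDomain -Name "Node5" -Parent "SiteB"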

S2D Replication: When you create a volume on this stretch cluster, Storage Spaces Direct itself handles the replication. You are not using Storage Replica (Option D). S2D’s internal replication engine will be configured to “stretch” across the sites.

Synchronous Replication: To meet the “RPO=0” requirement, you must configure this stretched S2D volume to use Synchronous Replication. This means that when the application (running in SiteA) writes data to its virtual disk (CSV), S2D intercepts that write. It simultaneously writes the data to the S2D disks in SiteA and sends the write I/O across the network to be written to the S2D disks in SiteB. The application waits and does not receive the “write complete” acknowledgement until both sites have confirmed the data is safely written.

Result: This is a true “active-active” or “active-passive” (depending on workload placement) cluster. If SiteA fails completely, the cluster quorum (using a witness) will remain online, and the cluster will automatically start the workload on a node in SiteB. Since the data was replicated synchronously, the workload starts with zero data loss.

Why A (A “Multi-Cluster Set”) is Incorrect: “Cluster Sets” was a different, now-less-common technology for federating multiple, separate clusters together. A stretch cluster is one single cluster. Also, this option incorrectly states “asynchronous” replication, which would not provide RPO=0.

Why C (A “Guest Cluster”) is Incorrect: A “Guest Cluster” is a failover cluster that is built inside of virtual machines (which themselves may or may not be on a host cluster). It is a “virtualized cluster” and does not describe the physical S2D host topology.

Why D (A “Storage Replica Partnership”) is Incorrect: Storage Replica is a technology used to replicate volumes between two separate servers or separate clusters (a “Cluster-to-Cluster” scenario). In a “Stretch Cluster,” you do not use Storage Replica. Storage Spaces Direct (S2D) itself is “stretched” and handles the replication natively. This is a common point of confusion. You use Storage Replica OR S2D Stretch Cluster, not both for the same volume.

Question 199. You are deploying Azure Site Recovery (ASR) to protect your on-premises VMware environment. You have deployed the primary ASR appliance, the Configuration Server, which is registered with your Recovery Services vault and connected to your vCenter server. You now have two distinct requirements:

 

  • You have a set of high-churn, critical database servers that are generating 500 GB of new data per day. The embedded component on the Configuration Server is becoming a bottleneck.

  • You need to perform a “failback” for a VM that was previously failed-over to Azure. This VM must be replicated from Azure back to your on-premises vSphere environment.

Which two ASR roles, which are embedded on the Configuration Server by default, can be “scaled-out” onto separate, dedicated servers to handle these two requirements?

A) Requirement 1: Scale-out Process Server, Requirement 2: Scale-out Master Target Server
B) Requirement 1: Scale-out Mobility Service, Requirement 2: Scale-out Failback Server
C) Requirement 1: Scale-out vCenter Server, Requirement 2: Scale-out Replication Gateway
D) Requirement 1: Scale-out Configuration Server, Requirement 2: Scale-out Master Target Server

Correct Answer: A

Explanation: 

The correct answer is A. The ASR Configuration Server appliance is a multi-role component. The two roles that can be “scaled-out” to separate servers to handle high load are the Process Server (for high-churn replication to Azure) and the Master Target Server (for high-volume failback from Azure).

Why A (Process Server and Master Target Server) is Correct: The ASR Configuration Server (CS) appliance, when deployed from the OVF template, contains three critical roles by default:

Configuration Server (CS): The “brain.” This component handles management, orchestration, and communication with the Azure portal and the on-premises vCenter. You can only have one of these.

Process Server (PS): The “workhorse” for replication to Azure. The Mobility Service on the source VMs sends all its replication data to the Process Server. The PS then caches, compresses, encrypts, and uploads this data to Azure.

Master Target Server (MT): The “catcher” for replication from Azure. This component is used only during failback. When failing back, the Azure VMs replicate their data to the Master Target Server, which then writes the data back to the on-premises vSphere environment (creating new VMDKs).

Addressing the Requirements:

Requirement 1 (High-Churn to Azure): The high-churn database servers are overwhelming the Process Server role. The data is coming in (from the Mobility Service) faster than the embedded PS can process and upload it. The solution is to deploy a scale-out Process Server on a new, dedicated VM. You then reconfigure the high-churn VMs to send their replication data to this new PS, alleviating the bottleneck on the CS.

Requirement 2 (Failback from Azure): The failback process involves replicating a VM from Azure to your on-premises environment. The component that receives this data is the Master Target Server (MT). For high-volume or parallel failbacks, the embedded MT can also become a bottleneck (I/O, network). The solution is to deploy a dedicated, scale-out Master Target Server to handle this failback data stream.

Therefore, the two roles that can be scaled-out are the Process Server (for “to-Azure” scale) and the Master Target Server (for “from-Azure” scale).

Why B, C, and D are Incorrect:

Mobility Service: This is the agent on the source VMs. It cannot be “scaled-out” in this context.

Failback Server / Replication Gateway: These are not the correct, official terms for the ASR components.

vCenter Server: This is a VMware component, not an ASR role.

Configuration Server: You cannot “scale-out” the Configuration Server. You can only have one CS per vault in a vSphere environment. You scale its sub-roles (the PS and MT).

Question 200. You are managing a large hybrid environment with 500 on-premises Windows Servers connected via Azure Arc-enabled servers. You are also using Microsoft Defender for Cloud to monitor the security posture of your entire hybrid fleet. When you view the “Secure Score” in the Defender for Cloud dashboard, you see a low score of 35%. You drill down and find that a majority of your on-premises servers are non-compliant with a recommendation called “System updates should be installed on your machines.” You need to understand which specific KBs (patches) are missing from a particular non-compliant server, “SRV-APP-01.” Where would you find this detailed, patch-level information? 

A) In the “Secure Score” blade of Microsoft Defender for Cloud.
B) In the “Guest Configuration” extension blade for the Azure Arc server.
C) In the “Update Management” solution, linked from the server’s “Insights” blade.
D) In the “Workload protections” dashboard, under the “Recommendations” blade.

Correct Answer: C

Explanation: 

The correct answer is C. The “System updates should be installed” recommendation in Microsoft Defender for Cloud is powered by the Update Management solution (either the Azure Automation-based one or the new, AMA-based one). To see the detailed, patch-by-patch compliance report, you must navigate to the Update Management interface.

Why C (In the “Update Management” solution) is Correct: This question tests the understanding of how different Azure services integrate.

Microsoft Defender for Cloud (MDC): This is the high-level “dashboard” for security. It ingests data from many sources to generate its “Secure Score” and “Recommendations.”

The “System updates” Recommendation: This specific recommendation is not generated by MDC itself. MDC is subscribing to the compliance data generated by a different service: Azure Automation Update Management (or the new Azure Update Manager).

The Source of Truth: The Update Management (UM) solution is the “source of truth” for patch compliance. It is the service that:

Configures the on-premises server (via the Log Analytics agent or Azure Monitor Agent) to scan against Windows Update or a WSUS server.

Collects the detailed results of that scan.

Stores this data in a Log Analytics workspace.

Generates a detailed, per-server, per-patch report showing exactly which updates (“KB-numbers”) are “Missing,” “Installed,” or “Pending.”

Finding the Data: When you are on the “SRV-APP-01” Azure Arc resource blade, you would navigate to the “Updates” or “Insights” (which links to “VM Insights”) blade. This will open the interface for the Update Management solution that is monitoring that server. In that interface, you can see a detailed “Compliance” tab that lists every single missing KB for that specific server. Defender for Cloud’s recommendation is just a summary or link back to this detailed data.
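For example, the same per-KB detail can be pulled directly from the linked Log Analytics workspace. A minimal PowerShell sketch, assuming the legacy Update Management “Update” table and a placeholder workspace ID (requires the Az.OperationalInsights module and an authenticated Az session):

$query = @'
Update
| where Computer == "SRV-APP-01" and UpdateState == "Needed"
| summarize arg_max(TimeGenerated, *) by KBID
| project Computer, KBID, Title, Classification
'@
# Run the query against the workspace that Update Management reports into.
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query |
    Select-Object -ExpandProperty Results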

Why A (In the “Secure Score” blade) is Incorrect: The “Secure Score” blade provides a high-level, aggregated score for your entire subscription or management group. It will tell you that you have X servers that are non-compliant, but it will not show you a per-server, per-KB list. It is a management-level summary.

Why B (In the “Guest Configuration” blade) is Incorrect: The “Guest Configuration” extension is for Azure Policy Guest Configuration. This is used to audit and enforce state-based settings (e.g., “Is the WinRM service disabled?”). It is not the technology used by Microsoft for periodic, scan-based patch compliance.

Why D (In the “Workload protections” dashboard) is Incorrect: This is another high-level dashboard within Defender for Cloud. Like the Secure Score, it will show you the “System updates” recommendation, but it is not the source of the data. It will not have the granular, patch-level detail. It will simply tell you “This server is non-compliant” and provide a link to the actual Update Management solution.
