Question 161. You are the lead administrator for a multi-site organization. You are planning to deploy a 4-node Storage Spaces Direct (S2D) cluster running Windows Server 2022. This cluster will host a new, mission-critical SQL Server workload that demands the highest possible random I/O performance and the lowest latency. The organization has stipulated that a storage efficiency of 50% is the target. The cluster must be able to sustain the failure of a single node without any data loss. Which Storage Spaces Direct resiliency setting should you implement for the volume that will host the SQL Server database files?
A) Three-way mirror
B) Mirror-accelerated parity
C) Nested two-way mirror
D) Two-way mirror
Correct Answer: D
Explanation:
The correct answer is D, Two-way mirror. This resiliency type perfectly aligns with all the specified constraints: a 4-node cluster, a 50% storage efficiency target, a single-node failure tolerance, and the need for high random I/O performance.
Why D (Two-way mirror) is Correct: A two-way mirror is a fundamental resiliency type in Storage Spaces Direct. When you configure a volume with a two-way mirror, Storage Spaces Direct creates two identical copies of all data. Critically, it ensures that these two copies are placed on different physical servers (fault domains).
Performance: This resiliency type offers excellent performance, especially for random I/O workloads like a SQL Server database. A write operation is simple and computationally light; the system just has to write the data in two separate locations. This is significantly faster than the “read-modify-write” operations required by any form of parity, which involve complex parity calculations and incur a substantial write penalty. For random reads, the system can read from whichever of the two copies is “closest” or on the least-busy node, which can also improve read performance.
Storage Efficiency: A two-way mirror, by definition, stores two copies of everything. To store 1 TB of data, you must consume 2 TB of physical storage. This results in a precise 50% storage efficiency (1 TB data / 2 TB physical storage). This exactly matches the organizational target specified in the question.
Fault Tolerance: In a cluster with three or more nodes, a two-way mirror can sustain the failure of a single node. If one node (which holds one copy of the data) fails, the second copy of the data is still fully available on another node, allowing the SQL Server workload to continue running without any data loss or interruption. Since the scenario specifies a 4-node cluster, a two-way mirror is fully supported and meets the single-node failure tolerance requirement.
Therefore, the two-way mirror is the only option that satisfies all three key requirements: high performance for random I/O, a 50% storage efficiency, and tolerance for a single node failure in a 4-node cluster.
Why A (Three-way mirror) is Incorrect: A three-way mirror provides superior fault tolerance (surviving two simultaneous node failures) and even better random read performance. However, it fails to meet the storage efficiency requirement. A three-way mirror creates three copies of the data, meaning 1 TB of data consumes 3 TB of physical storage. This results in a 33.3% storage efficiency, which is far below the stated target of 50%. This would be an unnecessary and costly over-provisioning of storage.
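As a quick illustration of the efficiency arithmetic used in the two explanations above, the following Python sketch (purely illustrative, not part of any Microsoft tooling) computes mirror efficiency from the number of data copies:

```python
def mirror_efficiency(copies: int) -> float:
    """Storage efficiency of an n-copy mirror: usable capacity / physical capacity."""
    return 1 / copies

def physical_needed(usable_tb: float, copies: int) -> float:
    """Physical storage consumed to hold a given amount of usable data."""
    return usable_tb * copies

# Two-way mirror: 1 TB of data consumes 2 TB of physical storage -> 50% efficiency.
print(f"Two-way mirror:   {mirror_efficiency(2):.1%} efficient, "
      f"1 TB data needs {physical_needed(1, 2):.0f} TB physical")

# Three-way mirror (option A): 1 TB of data consumes 3 TB -> ~33.3% efficiency.
print(f"Three-way mirror: {mirror_efficiency(3):.1%} efficient, "
      f"1 TB data needs {physical_needed(1, 3):.0f} TB physical")
```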
Why B (Mirror-accelerated parity) is Incorrect: Mirror-accelerated parity is a hybrid resiliency type that offers a balance between performance and capacity. It writes new data to a small, fast “mirror” portion of the volume and then later rotates that data to a larger, more efficient “parity” portion. While this is an excellent choice for “general purpose” or “archive” workloads (like file servers or VDI), it is not the ideal choice for a sustained, high-performance database. The “absolute best I/O performance” is found in an all-mirror volume. Furthermore, the efficiency of this volume would be much greater than 50% (e.g., 66-80%), so it does not align with the stated target.
Why C (Nested two-way mirror) is Incorrect: Nested resiliency is a specialized, high-availability feature designed exclusively for two-node Storage Spaces Direct clusters. It is not a standard or supported resiliency type for a 4-node cluster. Its purpose is to allow a two-node cluster to survive multiple failures (e.g., one node and a disk in the other node). Applying this to a 4-node cluster is not the correct design.
Question 162. You are configuring a disaster recovery plan for a physical Windows Server 2019 server named “SRV-FINANCE” using Azure Site Recovery (ASR). This server is critical and has a high I/O workload. You have already deployed an on-premises ASR Configuration Server. During the “Enable Replication” wizard, you observe that the data replication from the on-premises server to the Azure cache storage account is slow and is falling behind the server’s data change rate. You need to scale out the ASR deployment to handle the high replication load for this server and future servers. What is the recommended component to deploy to address this performance bottleneck?
A) A scale-out Process Server
B) A second ASR Configuration Server
C) The Azure Recovery Services Agent (MARS)
D) A scale-out Master Target Server
Correct Answer: A
Explanation:
The correct answer is A, a scale-out Process Server. The Process Server is the component specifically responsible for receiving, compressing, encrypting, and forwarding replication data, and it is the designed “scale-out” component for handling high-churn workloads.
Why A (A scale-out Process Server) is Correct: In an Azure Site Recovery deployment for VMware virtual machines or physical servers, the Process Server role is the “workhorse” of data replication.
Process Server Function: The ASR Mobility Service (installed on the source physical server, “SRV-FINANCE”) captures all disk writes. It forwards these writes to the Process Server. The Process Server’s job is to:
Receive the replication data.
Cache the data locally.
Compress the data.
Encrypt the data.
Transmit the encrypted, compressed data to the target cache storage account in Azure.
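The following is a toy Python sketch of that pipeline, purely to illustrate the ordering of the stages; ASR’s actual wire format, compression, and encryption are internal to the product and are not modeled here (the XOR step is only a stand-in for real encryption):

```python
import zlib

def process_server_forward(block: bytes) -> bytes:
    """Toy model of the Process Server stages: receive -> cache -> compress -> encrypt -> transmit."""
    cached = bytearray(block)                        # 1-2. receive the block and cache it locally
    compressed = zlib.compress(bytes(cached))        # 3. compress
    encrypted = bytes(b ^ 0x5A for b in compressed)  # 4. placeholder for real encryption
    return encrypted                                 # 5. ready to send to the Azure cache storage account

payload = b"disk write captured by the Mobility Service on SRV-FINANCE" * 100
print(f"raw {len(payload)} bytes -> {len(process_server_forward(payload))} bytes on the wire")
```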
The Bottleneck: By default, the Process Server role is embedded on the ASR Configuration Server. For a small number of VMs or low-churn servers, this is sufficient. However, for a high I/O workload like the one described, this single, embedded Process Server becomes a bottleneck. Its CPU, memory, local disk, or network bandwidth can become saturated, and it cannot “keep up” with the data change rate (churn), causing the replication to lag.
The Solution (Scaling Out): The designed solution for this exact problem is to deploy a “scale-out Process Server.” This involves deploying a new, separate on-premises virtual machine and installing only the Process Server role on it. You then register this new Process Server with the existing ASR Configuration Server.
Re-balancing the Load: Once the new Process Server is available, you can go into the ASR vault settings for the protected server (“SRV-FINANCE”) and change its configuration to use the new, dedicated scale-out Process Server instead of the default one on the Configuration Server. This dedicates the full resources of a new VM to handling the replication traffic for “SRV-FINANCE,” which will alleviate the bottleneck and allow the replication to catch up. You can deploy multiple scale-out Process Servers and load-balance your high-churn servers across them.
Why B (A second ASR Configuration Server) is Incorrect: You can only have one ASR Configuration Server per vCenter or per physical server environment, linked to a single Recovery Services vault. The Configuration Server is the central management and orchestration hub. You cannot deploy a “second” one and have it work with the first. You scale the Process Server role, not the Configuration Server role.
Why C (The Azure Recovery Services Agent – MARS) is Incorrect: The MARS agent is for a completely different service: Azure Backup. It is used to back up files, folders, and system state from a server directly to an Azure vault. It has absolutely no role in the Azure Site Recovery (ASR) replication process, which performs full-machine, block-level replication for disaster recovery.
Why D (A scale-out Master Target Server) is Incorrect: The Master Target Server is another role that is, by default, embedded on the Configuration Server. However, this role is used exclusively for failback—the process of replicating data from Azure back to your on-premises environment after a disaster. It has no role in the initial replication from on-premises to Azure. Scaling it out would not solve a problem with the primary replication to the cloud.
Question 163. You are managing a hybrid environment with on-premises Windows Server 2022 servers connected to Azure via the Azure Arc-enabled servers agent. A new security mandate requires that all servers have a specific registry key, HKLM\SOFTWARE\Policies\Contoso\Security\EnforceMode, set to a DWORD value of 1. You must implement a solution using Azure that continuously audits all 500 of your on-premises servers for this setting. If a server is found to be non-compliant, it must be reported on a central dashboard. Which Azure service is designed to perform this type of in-guest configuration auditing?
A) Microsoft Defender for Cloud
B) Azure Automation State Configuration
C) Azure Policy using a Guest Configuration assignment
D) Azure Monitor using Data Collection Rules (DCRs)
Correct Answer: C
Explanation:
The correct answer is C, Azure Policy using a Guest Configuration assignment. This is the specific Azure service designed to audit and, optionally, enforce configuration settings inside the guest operating system of both Azure VMs and Azure Arc-enabled servers.
Why C (Azure Policy using Guest Configuration) is Correct: This solution provides the exact functionality requested.
Azure Arc Prerequisite: The servers are already onboarded with Azure Arc. This is the foundational step, as it makes the on-premises servers appear as Azure resources.
Azure Policy: This is Azure’s “governance as code” service. It allows you to define and assign policies that your resources must adhere to.
Guest Configuration (GC): Standard Azure Policy can only check the properties of the Azure resource itself (e.g., its tags, its region, its SKU). To look inside the operating system (to check files, services, or registry keys), you must use the Guest Configuration (GC) feature. The Azure Arc agent installs a “Guest Configuration extension” on the server to manage this.
Audit Mode: You would create (or use a built-in) Guest Configuration policy definition that specifically checks for the existence and value of the HKLM\SOFTWARE\Policies\Contoso\Security\EnforceMode registry key. You would then assign this policy to the resource group or subscription containing all your Azure Arc-enabled servers, with the “effect” set to Audit.
Central Dashboard: The Guest Configuration agent on each server will periodically evaluate this policy. It will then report its status (“Compliant” or “Non-Compliant”) back to the Azure Policy service. This populates a central compliance dashboard in the Azure portal, where you can see, in real-time, which of the 500 servers are non-compliant.
This provides the exact, continuous, in-guest auditing and centralized reporting solution the question asks for.
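For illustration only, the pass/fail logic that such an audit evaluates on each server is roughly equivalent to this Python sketch (the real Guest Configuration policy is authored as a DSC-based definition, not Python):

```python
import winreg  # standard library; available on Windows only

KEY_PATH = r"SOFTWARE\Policies\Contoso\Security"
VALUE_NAME = "EnforceMode"

def is_compliant() -> bool:
    """Return True only if EnforceMode exists under HKLM and is a DWORD set to 1."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, value_type = winreg.QueryValueEx(key, VALUE_NAME)
            return value_type == winreg.REG_DWORD and value == 1
    except FileNotFoundError:
        return False  # key or value missing -> non-compliant

print("Compliant" if is_compliant() else "Non-Compliant")
```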
Why A (Microsoft Defender for Cloud) is Incorrect: Microsoft Defender for Cloud is a broad security posture management (CSPM) and workload protection (CWP) platform. While it uses Azure Policy to report on security misconfigurations (like “MFA should be enabled”), it is not the underlying engine you would use to create a custom audit for a specific registry key. It consumes the data from Azure Policy; it is not the tool you use to define this specific audit.
Why B (Azure Automation State Configuration) is Incorrect: Azure Automation State Configuration is a PowerShell Desired State Configuration (DSC) “pull server” hosted in Azure. You could use this to enforce the setting. However, its primary purpose is enforcement (“make this server compliant”), not auditing and reporting in a central dashboard. Azure Policy Guest Configuration is the modern, preferred solution for at-scale audit and governance, as it integrates directly with the Azure Policy compliance platform.
Why D (Azure Monitor using Data Collection Rules – DCRs) is Incorrect: Azure Monitor is for collecting and analyzing telemetry—logs and metrics. A Data Collection Rule (DCR) is used to define what data to collect (e.g., “collect the ‘System’ event log” or “collect the ‘% Processor Time’ counter”). It is not a policy engine. You cannot use a DCR to check the state of a registry key and report on its compliance. It is a “data-in” service, not a “state-audit” service.
Question 164. Your organization has a central Azure File Share that serves as the authoritative source for all user home drives. This data is synchronized to a Windows Server 2019 file server at your main office using Azure File Sync. This server, “HQ-FS01,” has a 10 TB volume dedicated to the sync, and the “Volume Free Space Policy” for cloud tiering is set to 20%. The total data in the Azure File Share is 8 TB. A user now opens a 100 GB file on “HQ-FS01” that had been tiered (was cold). The server’s local volume currently has 2.5 TB (25%) of free space. What is the expected behavior of the Azure File Sync agent in this scenario?
A) The file will fail to open because the Volume Free Space Policy of 20% is already met.
B) The agent will recall the 100 GB file and, after the recall, will do nothing because 2.4 TB (24%) of free space still meets the 20% policy.
C) The agent will recall the 100 GB file and then immediately tier 100 GB of other cold files to maintain exactly 25% free space.
D) The agent will first tier 100 GB of cold files and then recall the requested file to maintain the 2.5 TB of free space.
Correct Answer: B
Explanation:
The correct answer is B. The “Volume Free Space Policy” acts as a minimum threshold that triggers tiering when breached, not as a constant, exact target that must be actively maintained.
Why B (The agent will recall the 100 GB file… and do nothing) is Correct: This question tests the understanding of how the Volume Free Space Policy in Azure File Sync actually functions.
The Policy: The “Volume Free Space Policy” is set to 20%. This means the Azure File Sync agent’s goal is to ensure that the free space on the 10 TB volume never drops below 2 TB (20% of 10 TB).
The Current State: The volume currently has 2.5 TB (25%) of free space. The server is healthy and above its minimum threshold.
The Action (File Recall): A user requests a 100 GB tiered file. The Azure File Sync agent’s file system filter (StorageSync.sys) intercepts this request and begins recalling the file from the Azure File Share. The 100 GB file is downloaded and fully rehydrated on the local volume.
The New State: After the 100 GB recall is complete, the volume’s free space is now 2.5 TB – 100 GB = 2.4 TB. This is 24% of the total volume size.
The Agent’s Decision: The agent’s background cloud tiering process (which runs periodically, typically every hour) will check the volume’s free space. It will see 2.4 TB (24%) of free space. It will compare this to its minimum policy of 20%. Since 24% is greater than 20%, the policy is not breached. There is no “pressure” on the volume. Therefore, the agent will do nothing. It will not proactively tier any other files because its minimum free space goal is still being met.
The Volume Free Space Policy is a reactive control to prevent the disk from filling up, not a proactive one to maintain an exact free space percentage.
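A minimal Python sketch of this reactive threshold logic, using the numbers from the scenario (illustrative only):

```python
VOLUME_TB = 10.0
POLICY_FREE_PCT = 20.0   # Volume Free Space Policy: keep at least 20% free

def tiering_needed(free_tb: float) -> bool:
    """The policy is a minimum threshold: tier only if free space drops below it."""
    return (free_tb / VOLUME_TB) * 100 < POLICY_FREE_PCT

free_before = 2.5                 # 25% free before the recall
free_after = free_before - 0.1    # recall the 100 GB tiered file
print(f"after recall: {free_after:.1f} TB free ({free_after / VOLUME_TB:.0%})")
print("background tiering pass will",
      "tier cold files" if tiering_needed(free_after) else "do nothing")
```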
Why A (The file will fail to open…) is Incorrect: The policy being “met” does not block file access. The policy is a background management task. A user’s on-demand recall request will always be honored as long as there is physical space on the disk to download the file.
Why C (The agent will recall… and immediately tier 100 GB of other files…) is Incorrect: This describes a “quota” or “exact target” behavior, which is not how the policy works. The agent does not try to maintain the exact free space it had before the recall. It only cares about staying above the 20% minimum.
Why D (The agent will first tier… and then recall…) is Incorrect: This would result in a terrible user experience. A user’s on-demand file recall is a high-priority, foreground operation. The agent will not make the user wait while it first “makes room” by tiering other files (a slow background process), especially when there is already 2.5 TB of free space available. The recall happens first, and the tiering process evaluates the disk later.
Question 165. You are planning a two-node, hyper-converged Storage Spaces Direct (S2D) cluster for a remote office. The cluster will run Windows Server 2022. The primary design goal is maximum resilience, as there is no on-site IT staff. The cluster must be able to keep all storage volumes online even if one entire node fails and a single disk in the remaining, healthy node fails at the same time. Which Storage Spaces Direct resiliency feature is specifically designed to provide this level of fault tolerance for a two-node cluster?
A) Three-way mirror
B) Dual parity
C) Nested resiliency
D) Mirror-accelerated parity
Correct Answer: C
Explanation:
The correct answer is C, Nested resiliency. This feature was introduced in Windows Server 2019 specifically to add enhanced fault tolerance to two-node S2D clusters, addressing the exact failure scenario described.
Why C (Nested resiliency) is Correct: A standard two-node S2D cluster using a “two-way mirror” is vulnerable. It has two copies of data (one on each node). If Node 1 fails, the cluster stays online, but it is now running with only one copy of the data (on Node 2). If a single disk on Node 2 fails before Node 1 is recovered, all data is lost. This is a significant single point of failure.
Nested resiliency was created to solve this. It is a “mirror of mirrors” or, more accurately, a mirror of local-parity sets. It works by combining two types of resiliency:
Node-level Mirror: It first creates a two-way mirror, placing one copy of the data on Node 1 and the second copy on Node 2. This is the same as a standard two-way mirror and provides resilience against a single node failure.
Intra-Node Resiliency: In addition, within each node, the data held on that node is protected again across the drives inside that node, using either a local two-way mirror (nested two-way mirror) or local parity (nested mirror-accelerated parity). The key point is that each node has its own local resiliency layer.
This “nested” approach provides two layers of protection:
If an entire node (Node 1) fails, the cluster stays online, running on Node 2. The data on Node 2 is still resilient because it has its own local mirror or parity.
If a single disk on Node 2 fails while Node 1 is still down, the local resiliency on Node 2 can rebuild the data from the failed disk using its other disks.
This architecture directly survives the “one node failure and a single disk failure in the remaining node” scenario, which is why it is the perfect solution for a high-resilience, two-node edge deployment.
Why A (Three-way mirror) is Incorrect: A three-way mirror is not a valid or possible option for a two-node cluster. It requires a minimum of three nodes (fault domains) to store the three required copies of the data.
Why B (Dual parity) is Incorrect: Dual parity is also not a valid option for a two-node cluster. Parity-based resiliency in S2D requires a minimum of four nodes to function correctly and provide fault tolerance.
Why D (Mirror-accelerated parity) is Incorrect: This is a standard resiliency type that balances performance and capacity, but it does not, by itself, solve the “two-node” problem. A standard mirror-accelerated parity volume on a two-node cluster would still be a two-way mirror at the node level, and would still be vulnerable to the “node failure + disk failure” scenario. “Nested” is the specific feature that adds the second layer of protection.
Question 166. You are configuring a hybrid disaster recovery solution for your on-premises VMware vSphere environment. You are using Azure Site Recovery (ASR) to replicate your VMware virtual machines to Azure. You have deployed and configured the ASR Configuration Server. What is the next component you must install, and where must it be installed, to begin capturing and sending disk I/O from the source virtual machines?
A) The Azure Monitor Agent, installed on the Configuration Server.
B) The ASR Mobility Service, installed on each VMware virtual machine you want to protect.
C) The Azure Recovery Services Agent (MARS), installed on the vCenter server.
D) The ASR Provider, installed on each ESXi host.
Correct Answer: B
Explanation:
The correct answer is B. For VMware and physical server replication, the ASR Mobility Service is the agent that must be installed inside the guest operating system of each machine you want to protect.
Why B (The ASR Mobility Service…) is Correct: The ASR replication process for VMware (and physical servers) is agent-based, which is a key difference from Hyper-V replication.
Configuration Server (Step 1): You have already deployed the Configuration Server. This server acts as the on-premises management hub. It communicates with your vCenter Server to discover the inventory of virtual machines.
Mobility Service (Step 2): Once the Configuration Server can see the VMs, the next step is to install the ASR Mobility Service on the VMs you want to protect. This is a lightweight agent that runs inside the guest OS (Windows or Linux) of the source VMware VM.
Function: The Mobility Service’s sole purpose is to capture all disk write I/O (data changes) as they happen in real-time. It then forwards this data to the ASR Process Server (which is, by default, on the Configuration Server).
Installation: This installation is typically done via a “push” mechanism from the Configuration Server. You select the discovered VMs in the ASR portal, and the Configuration Server (using credentials you provide) reaches out to the guest OS of each VM, copies the installer, and installs the agent.
Without the Mobility Service installed inside the source VM, ASR has no way to “see” or “capture” the data changes, and replication cannot begin.
Why A (The Azure Monitor Agent…) is Incorrect: The Azure Monitor Agent (AMA) is a monitoring agent. It collects logs and performance metrics for Azure Monitor. It has no role in the Azure Site Recovery data replication pipeline.
Why C (The Azure Recovery Services Agent – MARS…) is Incorrect: The MARS agent is for Azure Backup. It backs up files and folders. It is not used for ASR, which is a disaster recovery service that replicates entire machines.
Why D (The ASR Provider…) is Incorrect: The “ASR Provider” (specifically, the “Microsoft Azure Site Recovery Provider for SCVMM”) is the component you install on a System Center Virtual Machine Manager (SCVMM) server to orchestrate replication for Hyper-V VMs. It is not used in a VMware deployment. The component that communicates with vCenter is the Configuration Server itself.
Question 167. Your company is migrating a legacy, physical file server running Windows Server 2008 to a new virtual machine running Windows Server 2022. The legacy server’s hardware is failing. The Storage Migration Service is not an option for this scenario. Your goal is to perform a Physical-to-Virtual (P2V) conversion. You need to create a VHDX file from the live, running physical server. This process must use a VSS snapshot to ensure the VHDX is crash-consistent. Which Microsoft Sysinternals utility is the standard, recommended tool for this task?
A) Robocopy
B) Diskpart
C) Disk2vhd
D) System Insights
Correct Answer: C
Explanation:
The correct answer is C, Disk2vhd. This is a free, lightweight, and powerful utility from the official Windows Sysinternals suite, created by Microsoft, and is the go-to tool for performing simple P2V (Physical-to-Virtual) conversions.
Why C (Disk2vhd) is Correct: Disk2vhd is specifically designed for the exact scenario described.
Microsoft Utility: It is part of the trusted Sysinternals toolkit, which is owned and distributed by Microsoft.
Live P2V (Online): Its primary feature is that it can be run on a live, running physical server. You do not need to take the server offline to perform the imaging, which is critical for minimizing downtime.
VSS Integration: The tool requires and uses the Volume Shadow Copy Service (VSS). It instructs VSS to create a point-in-time, crash-consistent (or application-consistent, if VSS writers are available) snapshot of the physical disks. It then reads the data from this static snapshot, not from the live, changing production volumes. This is crucial for creating a stable, bootable virtual disk.
VHDX Output: It directly creates a VHD or VHDX file, which is the native virtual hard disk format used by Hyper-V.
Simple Workflow: The migration process becomes:
Run Disk2vhd.exe on the physical Windows Server 2008 machine.
Select the volumes to include (e.g., the system and data drives).
Save the resulting .vhdx file to a network share or external drive.
In Hyper-V on your Windows Server 2022 host, create a new (Generation 1, for Server 2008) virtual machine.
When creating the VM, select “Use an existing virtual hard disk” and point it to the VHDX file you just created.
Boot the VM, uninstall old hardware drivers, and install Hyper-V Integration Services.
This tool is the simplest, most direct, and most commonly recommended method for this task.
Why A (Robocopy) is Incorrect: Robocopy (Robust File Copy) is a file-level copy utility. It is used for copying files and folders from one location to another. It cannot create a bootable, block-level disk image (VHDX) of an entire operating system partition.
Why B (Diskpart) is Incorrect: Diskpart is a command-line utility for managing disks, partitions, and volumes. You can use it to create, format, and delete partitions, but it has no capability to image a disk into a VHDX file, especially not from a live VSS snapshot.
Why D (System Insights) is Incorrect: System Insights is a performance monitoring and predictive analytics feature. Its purpose is to forecast future resource consumption (CPU, storage, etc.). It has absolutely no function related to disk imaging or P2V migrations.
Question 168. You are managing a hybrid environment using Windows Admin Center (WAC) installed in gateway mode on a server named “WAC-GW”. A new security policy mandates that all administrators connecting to the WAC web interface must authenticate using their on-premises Active Directory credentials and a Multi-Factor Authentication (MFA) challenge from Azure AD. You have already configured Azure AD Connect. What is the correct way to enforce this MFA requirement for the WAC gateway itself?
A) Enforce MFA for the “WAC-GW” server’s computer object in Active Directory.
B) Register the WAC gateway as an application in Azure AD, configure WAC to use Azure AD for authentication, and create a Conditional Access policy.
C) Install the Azure AD Application Proxy connector on “WAC-GW” to publish the WAC URL.
D) Deploy Microsoft Defender for Identity sensors on your domain controllers.
Correct Answer: B
Explanation:
The correct answer is B. Windows Admin Center (WAC) has a native, built-in integration with Azure Active Directory (Azure AD) for gateway authentication, which can then be protected by Conditional Access policies.
Why B (Register WAC with Azure AD… and use Conditional Access) is Correct: This option describes the modern, supported, and most secure method for protecting the Windows Admin Center gateway.
Default Authentication: By default, a WAC gateway uses standard Windows Authentication (Kerberos or NTLM) against your on-premises Active Directory. This prompts for a username and password but does not support MFA.
Azure AD Integration: WAC can be reconfigured to use Azure AD as its identity provider for the gateway itself. This is an explicit setting you configure during or after installation. This process involves:
App Registration: You must first create an “App Registration” in your Azure AD tenant. This registration defines the WAC gateway as a trusted application that can use Azure AD for sign-in. You configure its redirect URI (the WAC gateway’s URL).
Configure WAC: You then provide the details of this App Registration (like the Application ID and Tenant ID) to the WAC gateway.
Enforce MFA with Conditional Access: Once WAC is redirecting users to Azure AD for authentication, you can apply any standard Azure AD security feature. The primary tool for enforcing MFA is Conditional Access. You create a new Conditional Access policy in the Azure AD portal. The policy logic would be:
Users: Apply to “All administrators”.
Cloud App: Target the “Windows Admin Center” App Registration you created.
Grant Controls: “Require multi-factor authentication”.
After this is saved, any administrator attempting to access the WAC URL will be redirected to the Azure AD sign-in page, be required to enter their credentials, and then be forced to complete an MFA challenge (e.g., from their authenticator app) before Azure AD will issue a token and grant them access to the WAC gateway.
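Purely as an illustration of the policy logic (the field names below are simplified and do not follow the actual Microsoft Graph Conditional Access schema):

```python
# Simplified, hypothetical representation of the Conditional Access policy described above.
wac_mfa_policy = {
    "displayName": "Require MFA for Windows Admin Center",
    "users": "All administrators",
    "cloudApp": "Windows Admin Center gateway (the App Registration)",
    "grantControls": ["require_mfa"],
    "state": "enabled",
}

def access_granted(user_is_admin: bool, app: str, mfa_satisfied: bool) -> bool:
    """A token is issued only when the grant control (MFA) is satisfied for the targeted app."""
    if user_is_admin and app == wac_mfa_policy["cloudApp"]:
        return mfa_satisfied
    return True  # the policy does not apply to other apps or users

print(access_granted(True, wac_mfa_policy["cloudApp"], mfa_satisfied=False))  # False -> sign-in blocked
print(access_granted(True, wac_mfa_policy["cloudApp"], mfa_satisfied=True))   # True  -> token issued
```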
Why A (Enforce MFA for the server’s computer object) is Incorrect: This is not a valid concept. You enforce MFA for user accounts, not computer objects. The server’s computer object has an identity in AD, but it’s not what is used for the user’s interactive web sign-in.
Why C (Install the Azure AD Application Proxy…) is Incorrect: The Azure AD Application Proxy is a service for publishing an internal, on-premises web application to the external internet, while pre-authenticating users with Azure AD. While you could publish WAC this way, it is not the native or recommended method. WAC has its own built-in Azure AD integration (as described in option B), which is the more direct and intended configuration.
Why D (Deploy Microsoft Defender for Identity…) is Incorrect: Microsoft Defender for Identity is a threat detection platform. It monitors authentication traffic for signs of an attack (like Pass-the-Hash). It does not enforce authentication methods like MFA for a web application.
Question 169. You are an administrator for a Windows Server 2022 failover cluster that hosts several dozen Hyper-V virtual machines. You need to perform scheduled monthly maintenance, which involves installing security patches and rebooting every node in the cluster. You must automate this entire process. The solution must, for each node, one at a time, automatically drain all running virtual machines (using Live Migration) to other nodes, apply the updates, reboot the node, and then bring it back into service before proceeding to the next node. Which feature or technology is specifically designed for this automated, cluster-safe patching workflow?
A) Windows Admin Center (WAC)
B) Cluster-Aware Updating (CAU)
C) Windows Server Update Services (WSUS)
D) Azure Automation Update Management
Correct Answer: B
Explanation:
The correct answer is B, Cluster-Aware Updating (CAU). This is a feature built directly into Windows Server Failover Clustering for the express purpose of automating the patching of cluster nodes while maintaining service availability.
Why B (Cluster-Aware Updating – CAU) is Correct: Cluster-Aware Updating (CAU) provides an automated, “cluster-aware” solution for the entire update process. When an “Updating Run” is initiated (either manually or on a predefined schedule), CAU orchestrates the following complex workflow for each node in the cluster, one at a time:
Selects a Node: CAU selects the first node to update.
Drains Roles: This is the most critical step. CAU automatically places the node into cluster maintenance mode. Placing a node in maintenance mode automatically triggers a live migration of all running virtual machines (or other cluster roles) from that node to other available nodes in the cluster. This is done gracefully and without downtime for the VMs.
Applies Updates: Once the node is empty (has no active roles), CAU instructs the node to download and install the required updates. This can be from Windows Update, Microsoft Update, or an internal WSUS server.
Reboots (if necessary): If the updates require a reboot, CAU manages the restart of the node.
Rejoins Cluster: After the node comes back online, CAU verifies that it is healthy and brings it out of maintenance mode, making it available to host cluster roles again.
Repeats: CAU then moves to the next node in the cluster and repeats the entire process until all nodes are fully patched.
This workflow precisely matches the requirements: it’s automated, it handles draining and moving roles (VMs), and it minimizes disruption. CAU can be configured in “self-updating” mode to run on a schedule without any administrator intervention.
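A hedged Python sketch of the node-by-node loop CAU performs is shown below; the function bodies are placeholders, since the real work is done by the CAU orchestrator and the cluster service:

```python
def drain_roles(node): print(f"{node}: maintenance mode on, live-migrating VMs to other nodes")
def install_updates(node): print(f"{node}: downloading and installing updates")
def reboot_if_needed(node): print(f"{node}: rebooting")
def resume_node(node): print(f"{node}: healthy, maintenance mode off, hosting roles again")

def updating_run(cluster_nodes):
    """One Updating Run: each node is drained, patched, rebooted, and resumed in turn."""
    for node in cluster_nodes:      # strictly one node at a time
        drain_roles(node)
        install_updates(node)
        reboot_if_needed(node)
        resume_node(node)

updating_run(["NODE1", "NODE2", "NODE3", "NODE4"])
```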
Why A (Windows Admin Center – WAC) is Incorrect: Windows Admin Center is a management interface. It has an “Updates” extension, and for a cluster, this extension uses CAU on the back-end. You can use the WAC GUI to configure and run CAU. However, the underlying technology that actually performs the cluster-aware orchestration is CAU itself. WAC is the “wrapper,” but CAU is the “engine.”
Why C (Windows Server Update Services – WSUS) is Incorrect: WSUS is a repository and approval system for Windows updates. You would configure your cluster nodes (and all other servers) to get their updates from your WSUS server instead of the public Microsoft Update servers. This gives you control over which patches are approved. However, WSUS has no awareness of a failover cluster. It cannot orchestrate the draining of roles, placing nodes in maintenance mode, or patching nodes sequentially. If WSUS pushed updates to all nodes at once, they might all try to reboot simultaneously, causing a complete cluster outage. CAU uses WSUS as its update source, but CAU is the orchestrator.
Why D (Azure Automation Update Management) is Incorrect: Azure Automation Update Management is a cloud-based solution (part of Azure) for patching Azure VMs and on-premises servers (via Azure Arc). While it is a powerful tool, it is not natively cluster-aware in the same way CAU is. To make it “cluster-aware,” you would have to write complex “pre” and “post” scripts (e.g., PowerShell runbooks) that manually put the node in maintenance mode and take it out. CAU is the built-in, on-premises solution that handles all this complexity automatically.
Question 170. Your security team wants to mitigate credential theft attacks, such as Pass-the-Hash and Pass-the-Ticket, on your Windows Server 2022 domain controllers. You plan to implement Credential Guard. This feature relies on Virtualization-Based Security (VBS) to isolate the Local Security Authority Subsystem Service (LSASS). Which of the following is a mandatory hardware prerequisite for enabling VBS and Credential Guard?
A) A network adapter that supports RDMA (RoCE or iWARP).
B) A Trusted Platform Module (TPM) version 2.0.
C) An HBA (Host Bus Adapter) with secure boot capabilities.
D) A self-encrypting drive (SED) with BitLocker enabled.
Correct Answer: B
Explanation:
The correct answer is B, a Trusted Platform Module (TPM) version 2.0. This is a critical hardware component required to securely boot the system and protect the keys used by Virtualization-Based Security (VBS).
Why B (A TPM 2.0) is Correct: Credential Guard is not a simple software setting; it is a hardware-rooted security feature. It leverages Virtualization-Based Security (VBS), which uses the Hyper-V hypervisor to create a small, isolated “Virtual Secure Mode” (VSM) that is completely cut off from the main Windows kernel. The LSASS process, which stores credentials, is moved into this VSM. To ensure this VSM is secure and cannot be tampered with, a chain of trust must be established from the moment the power button is pressed.
UEFI and Secure Boot: The system must boot in UEFI mode (not legacy BIOS) and have Secure Boot enabled. Secure Boot ensures that only a Microsoft-signed bootloader can execute, preventing boot-time rootkits.
TPM 2.0: The TPM is a hardware “root of trust.” It is a dedicated crypto-processor on the motherboard. It plays two vital roles for VBS:
Attestation: It “measures” the boot process. It takes cryptographic hashes of the firmware, the bootloader, and the kernel as they load.
Key Protection: It securely “seals” (encrypts) the keys that VBS uses to protect its isolated memory. These keys can only be “unsealed” (decrypted) by the TPM if the boot measurements (hashes) are exactly the same as they were when the keys were sealed. This process guarantees that if any component of the boot process has been tampered with (e.g., by a rootkit), the TPM will refuse to release the VBS keys, and Credential Guard will not start, thereby protecting the credentials from the compromised environment. While VBS can be enabled without a TPM (for testing), it is not secure and not supported for production. A TPM 2.0 is the mandatory hardware anchor.
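A toy Python sketch of the measure-then-unseal idea follows; the real TPM uses hardware PCR banks and sealing operations, so this only models the decision logic:

```python
import hashlib

def measure(boot_components):
    """Toy model of measured boot: extend a hash chain over each component as it loads."""
    pcr = b"\x00" * 32
    for component in boot_components:
        pcr = hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()
    return pcr

KNOWN_GOOD = measure([b"uefi-firmware", b"signed-bootloader", b"ntoskrnl"])

def unseal_vbs_key(current_measurement, sealed_key):
    """The key is released only if the boot measurements match the state at sealing time."""
    if current_measurement == KNOWN_GOOD:
        return sealed_key
    raise PermissionError("Boot measurements changed; VBS keys stay sealed and Credential Guard does not start")

print(unseal_vbs_key(measure([b"uefi-firmware", b"signed-bootloader", b"ntoskrnl"]), b"vbs-key"))
try:
    unseal_vbs_key(measure([b"rootkit", b"signed-bootloader", b"ntoskrnl"]), b"vbs-key")
except PermissionError as err:
    print(err)
```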
Why A (A network adapter that supports RDMA) is Incorrect: RDMA (Remote Direct Memory Access) is a high-performance networking technology used for fast storage (like S2D) or remote procedure calls. It has absolutely no relationship to VBS, Credential Guard, or system security.
Why C (An HBA with secure boot capabilities) is Incorrect: An HBA (Host Bus Adapter) is a card used to connect to a SAN (Storage Area Network). While some HBAs have their own firmware, the “secure boot” that VBS relies on is the main system UEFI Secure Boot, not a component-level one.
Why D (A self-encrypting drive – SED) is Incorrect: A SED is a hard drive that automatically encrypts all data written to it. This is a form of data-at-rest encryption. While it is a good security practice (as is BitLocker, which is a software-based equivalent), it is not a prerequisite for Credential Guard. Credential Guard protects credentials in memory (in-use), while SED/BitLocker protects data on the disk (at-rest). They solve different problems.
Question 171. Your organization has a central Azure File Share and 20 branch offices. You are using Azure File Sync to provide a local cache of the data at each branch on a Windows Server. The total dataset in the cloud is 30 TB. The branch office servers have limited disk space, and you want to ensure that files not accessed in the last 60 days are proactively tiered (purged from the local cache) to save space, regardless of the amount of free space on the volume. Which Azure File Sync cloud tiering policy should you configure?
A) The Volume Free Space Policy
B) The Initial Download Policy
C) The Date Policy
D) The Local Cache Eviction Policy
Correct Answer: C
Explanation:
The correct answer is C, The Date Policy. This is the specific cloud tiering policy designed to proactively tier files based on their access time, independent of the volume’s free space.
Why C (The Date Policy) is Correct: Azure File Sync’s cloud tiering feature has two distinct policies that work together:
Volume Free Space Policy: This is the reactive policy. It defines a minimum percentage of free space to maintain on the volume (e.g., “keep 20% free”). The agent’s tiering process will only run if the free space drops below this threshold. This is the primary mechanism to prevent the local disk from filling up.
Date Policy: This is the proactive policy. When enabled, it tells the agent to tier any file that has not been accessed (read or written to) within a specified number of days (e.g., 60 days). This policy will run and tier old files even if the Volume Free Space Policy is not breached.
The scenario explicitly asks for a proactive policy to tier files based on an access time (“not accessed in the last 60 days”) regardless of the volume’s free space. This is the precise definition and function of the Date Policy. Enabling this policy ensures that “cold” data is automatically cleaned up from the branch office caches, minimizing local storage consumption even when the disks are not yet full.
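A minimal Python sketch of the Date Policy decision, assuming a 60-day setting as in the scenario (illustrative only):

```python
from datetime import datetime, timedelta

DATE_POLICY_DAYS = 60  # tier files not accessed in the last 60 days

def should_tier(last_access: datetime, now: datetime) -> bool:
    """The Date Policy is proactive: age alone decides, regardless of volume free space."""
    return (now - last_access) > timedelta(days=DATE_POLICY_DAYS)

now = datetime(2025, 11, 1)
print(should_tier(datetime(2025, 7, 1), now))    # ~120 days since last access -> True, tier it
print(should_tier(datetime(2025, 10, 15), now))  # ~17 days since last access  -> False, keep it local
```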
Why A (The Volume Free Space Policy) is Incorrect: The Volume Free Space Policy is reactive. If this policy was set to 20% and the volume currently had 40% free space, the agent would not tier any files, even if they were a year old. This does not meet the requirement to proactively tier files based on their age.
Why B (The Initial Download Policy) is Incorrect: The Initial Download Policy is a setting for a new server endpoint. It determines how the server is hydrated for the first time. The options are “Namespace only” (download all the file names/folders, but tier all the data) or “Namespace first, then content” (download the namespace, and then download all the file content as well). This is a one-time setting for initial setup, not an ongoing tiering policy.
Why D (The Local Cache Eviction Policy) is Incorrect: “Local Cache Eviction Policy” is a descriptive, general term, but it is not the actual name of the feature within Azure File Sync. The correct, official terms for the two policies are “Volume Free Space Policy” and “Date Policy.”
Question 172. You are migrating a 10 TB file server named “FS-OLD” (running Windows Server 2012) to a new server “FS-NEW” (running Windows Server 2022). You are using the Storage Migration Service (SMS) in Windows Admin Center. You have already completed the “Inventory” and “Transfer” phases. The data is fully synced. You are now ready for the final step during a weekend maintenance window. Which of the following actions is not performed by the Storage Migration Service during the “Cutover” phase?
A) The Storage Migration Service assigns the IP address(es) of “FS-OLD” to “FS-NEW”.
B) The Storage Migration Service renames the “FS-OLD” server to a new, randomized name.
C) The Storage Migration Service renames the “FS-NEW” server to “FS-OLD”.
D) The Storage Migration Service replicates the NTFS permissions to the “FS-NEW” server.
Correct Answer: D
Explanation:
The correct answer is D. The replication of NTFS permissions is a core part of the “Transfer” phase, not the “Cutover” phase. The Cutover phase is focused exclusively on the identity and network configuration swap.
Why D (Replicating NTFS permissions) is Incorrect for the Cutover Phase: The Storage Migration Service (SMS) operates in three distinct, sequential phases. It is critical to understand what happens in each.
Inventory: SMS scans the source server(s) and inventories all volumes, files, shares, and configurations.
Transfer: This is the long-running “data copy” phase. During this phase, SMS performs the bulk transfer of all files and folders from the source to the destination. This is when it explicitly copies all the data and its associated NTFS ACLs (permissions). This phase is idempotent and can be re-run multiple times to perform a delta-sync, copying only the files that have changed. The permissions are already on the “FS-NEW” server before the cutover begins.
Cutover: This is the final, very fast “identity swap” phase that you execute during the maintenance window. This phase assumes the data and permissions are already in place. The cutover phase performs the following steps in sequence:
It stops the services on the source server (“FS-OLD”).
It renames the source server (“FS-OLD”) to a new, randomly generated name (as described in option B).
It transfers the IP address(es) from the source server to the destination server (“FS-NEW”) (as described in option A).
It renames the destination server (“FS-NEW”) to take on the original name of the source server (“FS-OLD”) (as described in option C).
It recreates all the SMB share configurations on the newly-named destination server.
It restarts the server.
Therefore, replicating NTFS permissions is a “Transfer” phase task, not a “Cutover” phase task.
Why A, B, and C (IP assignment and renames) are Correct for the Cutover Phase: These options describe the very definition of the cutover. The entire purpose of the cutover is to have the new server “FS-NEW” impersonate the old server “FS-OLD” so that no client-side changes are needed. To do this, it must take its name (option C) and its IP address (option A), and to prevent a name conflict on the network, it must first rename the old server to something else (option B).
Question 173. You are managing a highly secure on-premises Active Directory environment. You want to implement a cloud-based solution that can detect identity-based threats. Your goal is to monitor all NTLM and Kerberos authentication traffic coming to your domain controllers, use User and Entity Behavior Analytics (UEBA) to build a baseline of normal activity, and then generate alerts for anomalies such as Pass-the-Hash, Pass-the-Ticket, and Golden Ticket attacks. Which Microsoft security service is specifically designed for this purpose?
A) Microsoft Defender for Cloud
B) Azure AD Identity Protection
C) Microsoft Defender for Identity
D) Azure AD Password Protection
Correct Answer: C
Explanation:
The correct answer is C, Microsoft Defender for Identity. This is a cloud-based security solution that is purpose-built to protect on-premises Active Directory Domain Services (AD DS) environments by analyzing authentication traffic and user behavior.
Why C (Microsoft Defender for Identity) is Correct: Microsoft Defender for Identity (formerly known as Azure Advanced Threat Protection or Azure ATP) is a User and Entity Behavior Analytics (UEBA) solution for AD DS. Its entire architecture and function match the scenario:
On-Premises Monitoring: It works by deploying a lightweight “Defender for Identity sensor” directly onto your on-premises domain controllers.
Traffic Analysis: This sensor non-intrusively monitors the network traffic (e.g., NTLM, Kerberos, DNS, RPC) destined for the domain controller and also parses relevant Windows Event Logs related to authentication.
Cloud-Based UEBA: The sensor sends this metadata (not the full packets, just the relevant security data) to the Microsoft Defender for Identity cloud service. This service uses machine learning and behavioral analytics (UEBA) to build a “normal” profile for every user and device (e.g., “This user normally logs on from these 3 machines and accesses these 5 servers between 9-5”).
Threat Detection: When the service detects an anomaly (e.g., the user’s credentials are suddenly used from an unknown server) or a known attack pattern, it generates a high-fidelity alert. It has specific detection capabilities for “Pass-the-Hash,” “Pass-the-Ticket,” “Golden Ticket,” “DCSync,” reconnaissance (e.g., “Active Directory Enumeration”), and other lateral movement techniques.
This provides the exact, cloud-based, behavior-driven threat detection for on-premises Active Directory that the question describes.
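The baseline-then-alert idea can be sketched in a few lines of Python; this is only a conceptual model, not how Defender for Identity’s detections are actually implemented:

```python
from collections import defaultdict

baseline = defaultdict(set)  # learned "normal" source machines per account

def learn(user, source_machine):
    """Build the behavioral baseline from observed, legitimate authentications."""
    baseline[user].add(source_machine)

def evaluate(user, source_machine):
    """Flag authentications from machines outside the learned baseline."""
    if source_machine not in baseline[user]:
        return f"ALERT: {user} authenticated from unfamiliar host {source_machine} (possible lateral movement)"
    return "normal"

for host in ("WKS-001", "WKS-002", "FS01"):
    learn("contoso\\jdoe", host)

print(evaluate("contoso\\jdoe", "WKS-002"))  # normal
print(evaluate("contoso\\jdoe", "DC01"))     # ALERT ...
```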
Why A (Microsoft Defender for Cloud) is Incorrect: Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection (CWP) solution. It focuses on the security configuration of your Azure resources (like VMs, Storage, SQL) and hybrid servers (via Arc). It will tell you if a server is vulnerable (e.g., “Missing Patches” or “RDP port open to internet”), but it is not the specialized UEBA tool for analyzing Active Directory authentication protocols.
Why B (Azure AD Identity Protection) is Incorrect: Azure AD Identity Protection is a very similar service, but it is for Azure Active Directory (Azure AD). It analyzes sign-in logs and behavior for cloud identities. It detects cloud-centric risks like “impossible travel,” “logon from anonymous IP,” or “leaked cloud credentials.” It has no visibility into your on-premises NTLM or Kerberos traffic.
Why D (Azure AD Password Protection) is Incorrect: Azure AD Password Protection is a feature that prevents users from setting weak or compromised passwords. It checks new passwords against a global list of banned and known-leaked passwords. It is a preventative hygiene tool, not a real-time threat detection service for authentication traffic.
Question 174. You are deploying a new 6-node Windows Server 2022 failover cluster. The cluster will host a SQL Server 2019 availability group. You need to configure a quorum witness for the cluster. The cluster nodes are all located in a single on-premises data center. The business has a “cloud-first” policy and wants to avoid using on-premises file shares or dedicated storage LUNs for management tasks. What is the recommended, most resilient, and easiest-to-configure witness type for this scenario?
A) A File Share Witness
B) A Disk Witness
C) A Cloud Witness
D) A Node Majority Quorum (no witness)
Correct Answer: C
Explanation:
The correct answer is C, a Cloud Witness. Given the “cloud-first” policy and the desire to avoid on-premises infrastructure for the witness, a Cloud Witness is the modern, recommended solution.
Why C (A Cloud Witness) is Correct: A cluster’s quorum model determines how many “votes” it needs to stay online. In a cluster with an even number of nodes (like the 6-node cluster here), you must have a witness to act as a “tie-breaker.” This witness provides the 7th vote, allowing the cluster to sustain a 3-node failure and remain online with 4 votes (3 nodes + 1 witness).
Cloud-Based: A Cloud Witness is a new witness type that leverages Microsoft Azure. It is incredibly simple to set up.
Azure Storage: It uses a standard Azure Storage Account (which is extremely cheap and resilient) to store a single, tiny “blob” file. The cluster nodes read and write to this blob to “lock” the witness and determine which nodes are in the majority.
Resilience: It is highly resilient. The storage account is managed by Azure (with LRS or GRS redundancy, for example), providing high availability that is completely independent of your on-premises data center. If your on-premises file server (for a File Share Witness) or SAN (for a Disk Witness) fails, the cluster quorum can be impacted. A Cloud Witness is not subject to these on-premises failures.
Meets Requirements: It perfectly aligns with the “cloud-first” policy and explicitly avoids the use of “on-premises file shares” (ruling out option A) or “dedicated storage LUNs” (ruling out option B). It is also the easiest to configure, requiring only an Azure subscription, a storage account, and the storage account’s access key.
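A minimal Python sketch of the quorum vote arithmetic described above (it assumes the witness remains reachable by the surviving nodes):

```python
def cluster_stays_online(total_nodes: int, failed_nodes: int, has_witness: bool) -> bool:
    """Majority-vote check: surviving votes (nodes + witness) must exceed half of all votes."""
    total_votes = total_nodes + (1 if has_witness else 0)
    surviving_votes = (total_nodes - failed_nodes) + (1 if has_witness else 0)
    return surviving_votes > total_votes / 2

# 6-node cluster, 3 nodes fail:
print(cluster_stays_online(6, 3, has_witness=True))   # True  -> 4 of 7 votes, majority held
print(cluster_stays_online(6, 3, has_witness=False))  # False -> 3 of 6 votes, no majority
```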
Why A (A File Share Witness) is Incorrect: A File Share Witness (FSW) is a valid witness type, but it requires an on-premises file share, which is typically hosted on another server (like a domain controller or a separate file server). The question explicitly states a desire to avoid using on-premises file shares.
Why B (A Disk Witness) is Incorrect: A Disk Witness is a small LUN (disk) presented from a shared storage array (like a SAN). This LUN is added to the cluster as a cluster resource. The question explicitly states a desire to avoid using “dedicated storage LUNs” for management.
Why D (A Node Majority Quorum – no witness) is Incorrect: A “Node Majority” quorum (which is the default for an odd number of nodes) is the wrong configuration for a cluster with an even number of nodes. With 6 nodes, a “Node Majority” quorum would mean the cluster needs 4 (majority of 6) nodes to run. If the cluster split exactly in half (3 nodes in one site, 3 in another, or a 3-node failure), neither side would have a majority, and the entire cluster would go offline. A witness is required for an even-node cluster to prevent this “split-brain” scenario and provide a tie-breaker.
Question 175. Your organization’s security policy requires that all Windows Server 2022 machines use application allow-listing. The policy must prevent all unauthorized executables, scripts, and drivers from running. The policy must be enforced by the hypervisor and be protected by virtualization-based security (VBS) so that a local administrator cannot tamper with or disable it. Which Windows Server security feature, combined with VBS, should you implement?
A) Windows Defender Application Control (WDAC) with Hypervisor-Protected Code Integrity (HVCI)
B) AppLocker with default rules
C) Credential Guard and UEFI Secure Boot
D) Just Enough Administration (JEA)
Correct Answer: A
Explanation:
The correct answer is A. Windows Defender Application Control (WDAC) is the application allow-listing technology, and Hypervisor-Protected Code Integrity (HVCI) is the specific VBS feature that protects it.
Why A (WDAC with HVCI) is Correct: This option combines the two exact technologies designed for this.
Windows Defender Application Control (WDAC): This is Microsoft’s most robust application allow-listing solution. It operates at a deep, kernel level. You create a “code integrity (CI) policy,” which is an XML file that explicitly defines what code is trusted to run (e.g., “all code signed by Microsoft” or “all code with this hash”).
Enforces on Drivers/Scripts: Unlike AppLocker, WDAC enforces this policy on all executable code, including user-mode applications, PowerShell scripts, and, critically, kernel-mode drivers. This meets the requirement to block all unauthorized code types.
Hypervisor-Protected Code Integrity (HVCI): This is the VBS component (also known as “Memory Integrity”). When HVCI is enabled, Windows uses the hypervisor to create a “Virtual Secure Mode” (VSM). It then moves the kernel-mode code integrity subsystem (the part of Windows that enforces the WDAC policy) into this isolated VSM.
Tamper-Proof: The result is that the WDAC policy and its enforcement engine are no longer in the main Windows kernel. They are in a protected memory space that the kernel cannot access. This means that even if an attacker gains full kernel-level (administrator) access, they cannot tamper with the policy, disable the service, or bypass it to load a malicious driver. This directly meets the “enforced by the hypervisor” and “local administrator cannot tamper with” requirements.
Why B (AppLocker with default rules) is Incorrect: AppLocker is an older, less secure application allow-listing technology. It has two critical flaws: it cannot be used to block kernel-mode drivers, and its policies are not protected by VBS. A local administrator can simply stop the AppLocker service (AppIDSvc) or modify the Group Policy to disable it, making it easy to bypass.
Why C (Credential Guard and UEFI Secure Boot) is Incorrect: This option is a plausible distractor because it uses the right “buzzwords.” Credential Guard also uses VBS, but its purpose is to protect the LSASS process to prevent credential theft (Pass-the-Hash). It has nothing to do with application allow-listing. UEFI Secure Boot is a prerequisite for VBS (both Credential Guard and HVCI), but it is not the allow-listing feature itself.
Why D (Just Enough Administration – JEA) is Incorrect: JEA is a delegated administration technology. It is used to give a non-administrator user limited, temporary rights to run specific PowerShell commands (e.g., Restart-Service). It does not control what executables or drivers can run on the system.
Question 176. You are deploying a Windows Server 2022 failover cluster that will span two physical data centers, “Site A” and “Site B”. The sites are connected by a 1 Gbps WAN link with an average round-trip latency of 8 milliseconds (ms). You need to replicate a 5 TB volume (which hosts a critical database) from the cluster in Site A to the cluster in Site B. The business requirement is a Recovery Point Objective (RPO) of zero, meaning absolutely no data loss can be tolerated in a site failure. Which Windows Server technology and replication mode must you use?
A) Hyper-V Replica in Asynchronous mode.
B) Storage Replica in Synchronous mode.
C) Storage Replica in Asynchronous mode.
D) Distributed File System Replication (DFS-R).
Correct Answer: B
Explanation:
The correct answer is B, Storage Replica in Synchronous mode. This is the only Windows-native technology listed that can provide a “zero data loss” (RPO=0) guarantee. The 8 ms round-trip latency is above Microsoft’s recommended tolerance for synchronous replication, but synchronous Storage Replica is still the only option that meets the RPO. (Note: In a real-world scenario, 8 ms would be problematic, but for the exam, RPO=0 maps only to Synchronous.)
Why B (Storage Replica in Synchronous mode) is Correct: This question tests the core concepts of RPO and replication technologies.
RPO of Zero: The requirement for a “Recovery Point Objective (RPO) of zero” is the most important constraint. This means that no data loss is permitted. When a write occurs on the primary site, that write must be successfully committed to the secondary site before the application (the database) receives the “write complete” acknowledgement.
Synchronous Replication: This process is called synchronous replication. Storage Replica is the Windows Server feature that provides block-level, volume-to-volume replication. It has two modes:
Synchronous (SR-S): The write I/O is sent to both the local and remote storage. The application’s I/O waits for an acknowledgement from both locations. This guarantees zero data loss but adds latency to every single write.
Asynchronous (SR-A): The write I/O is sent to the local storage, and the application gets an immediate acknowledgement. The write is then sent to the remote storage in the background. This is much faster for the application, but there is a replication lag of seconds or minutes, meaning the most recent writes can be lost if the primary site fails.
The Only Choice: To meet the RPO=0 requirement, Storage Replica in Synchronous mode is the only technology that fits.
The Latency Issue: Microsoft’s official recommendation for SR-S is a network latency of < 5ms round-trip. The 8ms in the scenario is higher than this, which would cause significant application performance degradation (every write would be delayed by 8ms). However, among the choices given, it is the only one that can technically provide an RPO of zero. The other options cannot provide an RPO of zero under any circumstances.
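As a hedged illustration, the PowerShell below shows a server-to-server Storage Replica partnership in synchronous mode (the server names, volume letters, and replication group names are placeholders; a stretch cluster deployment uses the same feature with cluster-specific configuration). Test-SRTopology is run first because it reports whether the link and log disks can sustain synchronous replication at the observed write rate:

```powershell
# Validate the topology before committing to synchronous replication (placeholder names)
Test-SRTopology -SourceComputerName 'SR-SRV01' -SourceVolumeName 'D:' -SourceLogVolumeName 'L:' `
    -DestinationComputerName 'SR-SRV02' -DestinationVolumeName 'D:' -DestinationLogVolumeName 'L:' `
    -DurationInMinutes 30 -ResultPath 'C:\Temp'

# Create the partnership in synchronous mode: writes are acknowledged to the application
# only after they are committed at both sites, which is what delivers RPO = 0
New-SRPartnership -SourceComputerName 'SR-SRV01' -SourceRGName 'RG01' `
    -SourceVolumeName 'D:' -SourceLogVolumeName 'L:' `
    -DestinationComputerName 'SR-SRV02' -DestinationRGName 'RG02' `
    -DestinationVolumeName 'D:' -DestinationLogVolumeName 'L:' `
    -ReplicationMode Synchronous
```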
Why A (Hyper-V Replica in Asynchronous mode) is Incorrect: Hyper-V Replica is a VM-level replication technology. It is exclusively asynchronous. Its most frequent replication interval is 30 seconds. This means it has a minimum RPO of 30 seconds, which does not meet the RPO=0 requirement.
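The limitation is visible in the cmdlet itself. In this illustrative snippet (the VM and replica server names are placeholders), -ReplicationFrequencySec accepts only 30, 300, or 900 seconds, so the best achievable RPO is roughly 30 seconds:

```powershell
# Even the most aggressive setting replicates changes only every 30 seconds
Enable-VMReplication -VMName 'SQL-VM01' `
    -ReplicaServerName 'HV-SITEB.contoso.com' -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 30
```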
Why C (Storage Replica in Asynchronous mode) is Incorrect: As explained above, asynchronous replication by definition does not have an RPO of zero. It is designed for scenarios where some data loss is acceptable in exchange for better performance over long-distance, higher-latency links.
Why D (Distributed File System Replication – DFS-R) is Incorrect: DFS-R is a file-based, multi-master replication technology. It is asynchronous, with replication intervals often measured in minutes or hours. It is also not supported for replicating live, open database files (such as SQL Server .mdf/.ldf files), because it cannot replicate files that are held open with exclusive locks.
Question 177. Your on-premises Active Directory Domain Services (AD DS) is synchronized with Azure AD using Azure AD Connect. Password Hash Synchronization is enabled. Your security team is concerned that user passwords are too simple and are being reused from other breached websites. You need to implement a solution that prevents users from changing their AD password to a known, compromised password (i.e., one that has appeared in a public data breach). You also want to block company-specific terms like “Contoso” and “Q4-2025”. Which Azure AD service should you deploy and configure?
A) Azure AD Privileged Identity Management (PIM)
B) Azure AD Identity Protection
C) Microsoft Defender for Identity
D) Azure AD Password Protection
Correct Answer: D
Explanation:
The correct answer is D, Azure AD Password Protection. This service is specifically designed to enforce strong password policies that go beyond simple “complexity” by checking against global and custom “banned” password lists.
Why D (Azure AD Password Protection) is Correct: Azure AD Password Protection is a feature that addresses the exact problem of weak and compromised passwords. It works in two ways:
Global Banned List: Microsoft maintains a dynamic, global list of millions of passwords that are known to be weak (e.g., “Password123”) or have appeared in public data breaches. When you enable this feature, Azure AD checks any new password (set in the cloud or on-premises) against this list and rejects it if it is a match. This directly meets the “known, compromised password” requirement.
Custom Banned List: The service also allows you to create your own custom banned-password list. In this list, you would add company-specific terms like "Contoso," "Q4-2025," product names, or office locations. The service also normalizes common character substitutions, so variations such as "C0nt0s0!" are blocked as well. This meets the second requirement.
On-Premises Protection: A key part of this solution is the on-premises agent deployment. You install the Azure AD Password Protection DC agent on your domain controllers and the Azure AD Password Protection proxy service on one or more member servers; the proxy downloads the banned-password policy from Azure AD on behalf of the DC agents, so the domain controllers never need direct internet access. When a user on-premises tries to change their AD password (for example, via Ctrl+Alt+Del), the DC agent's password filter DLL checks the new password against the locally cached policy. If it is a banned password, the DC rejects the change. This provides the same protection for your synchronized on-premises AD as you have in the cloud.
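As a rough sketch of the on-premises setup (the UPN is a placeholder, and the cmdlets come from the AzureADPasswordProtection module that ships with the agents), registration is typically a two-step process run on the proxy server, after which the DC agents begin pulling the banned-password policy:

```powershell
Import-Module AzureADPasswordProtection

# Register the proxy service with the Azure AD tenant (requires a privileged Azure AD account)
Register-AzureADPasswordProtectionProxy -AccountUpn 'admin@contoso.onmicrosoft.com'

# Register the on-premises forest so DC agents can retrieve the policy through the proxy
Register-AzureADPasswordProtectionForest -AccountUpn 'admin@contoso.onmicrosoft.com'

# Verify that the domain controller agents are reporting in and receiving the policy
Get-AzureADPasswordProtectionDCAgent
```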
Why A (Azure AD Privileged Identity Management – PIM) is Incorrect: PIM is a service for managing and securing privileged administrator roles. Its features are “Just-in-Time” (JIT) role activation and access reviews. It does not manage or enforce password content for standard users.
Why B (Azure AD Identity Protection) is Incorrect: Azure AD Identity Protection is a risk detection engine. It consumes signals (including the “leaked credential” signal from the Password Protection service) to identify “risky users” and “risky sign-ins.” It is the platform that reports on the risk, but Azure AD Password Protection is the feature that prevents the password from being set in the first place.
Why C (Microsoft Defender for Identity) is Incorrect: This is a threat detection platform for on-premises Active Directory. It detects attacks like Pass-the-Hash. It does not prevent users from setting a specific password.
Question 178. You are managing a hybrid environment with on-premises servers running Windows Server 2012 R2 and newer. You need to use a single, modern, browser-based management tool to manage all your servers. This tool must be installed on-premises in a "gateway" configuration, and it must be able to manage servers that are not domain-joined (i.e., in a workgroup) by using local-administrator credentials. It should also provide an integrated web-based PowerShell console and be the primary platform for registering servers with Azure hybrid services like Azure Backup or Azure File Sync. Which management tool should you deploy?
A) System Center Virtual Machine Manager (SCVMM)
B) Remote Server Administration Tools (RSAT)
C) Windows Admin Center (WAC)
D) PowerShell Web Access
Correct Answer: C
Explanation:
The correct answer is C, Windows Admin Center (WAC). WAC is Microsoft’s modern, unified, browser-based management tool designed to manage Windows Server (and client) instances, whether they are on-premises, in Azure, domain-joined, or in a workgroup.
Why C (Windows Admin Center – WAC) is Correct: Windows Admin Center (WAC) meets every single requirement listed in the prompt:
Modern, Browser-Based: WAC is a web application that you access through a modern browser (like Edge or Chrome). It provides a rich, graphical interface for server management.
On-Premises Gateway: You install WAC on a designated management server (a “gateway server”). Administrators connect their browsers to this gateway, which then uses WinRM (PowerShell Remoting) to connect to and manage the target servers.
Manages Non-Domain-Joined Servers: WAC is not limited to domain-joined machines. When you add a new server connection, WAC explicitly asks for the credentials to use. You can easily provide local administrator credentials (e.g., .\Administrator and the password) to manage servers that are in a workgroup. (This requires some WinRM configuration, such as adding the workgroup server to the TrustedHosts list on the WAC gateway; see the sketch after this list.)
Integrated PowerShell: WAC has a built-in “PowerShell” tool that gives you a full, web-based PowerShell console directly connected to the target server you are managing.
Azure Hybrid Services: WAC is the primary on-ramp for Azure hybrid services. Its “Azure hybrid center” and built-in extensions make it trivial to register a server with Azure Arc, Azure Backup, Azure File Sync, Azure Monitor, and others, with wizard-driven graphical interfaces.
Backwards-Compatibility: WAC supports managing Windows Server 2012 R2 and newer, as specified.
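The workgroup scenario mentioned above typically needs only a small amount of WinRM preparation. The following is a minimal sketch (the server name FS-WORKGROUP01 is a placeholder): remoting is enabled on the target, and the non-domain-joined target is added to the gateway's TrustedHosts list so the WinRM client on the gateway will connect to it:

```powershell
# On the workgroup target server: make sure WinRM / PowerShell remoting is listening
Enable-PSRemoting -Force

# On the WAC gateway server: trust the non-domain-joined target (placeholder name)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'FS-WORKGROUP01' -Concatenate -Force

# In the WAC UI, add the server connection and supply local credentials, e.g. FS-WORKGROUP01\Administrator
```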
Why A (System Center Virtual Machine Manager – SCVMM) is Incorrect: SCVMM is a heavy, enterprise-scale management solution focused exclusively on managing the virtualization fabric (Hyper-V hosts, clusters, storage, and networking). It is not a general-purpose tool for managing file services, registry, or event logs on a server, and it is not a lightweight, browser-based tool.
Why B (Remote Server Administration Tools – RSAT) is Incorrect: RSAT is a collection of traditional graphical tools and MMC snap-ins (such as Server Manager and Active Directory Users and Computers) that you install on a client PC or a server with the Desktop Experience. It is not "browser-based." It is a set of "thick client" applications.
Why D (PowerShell Web Access) is Incorrect: PowerShell Web Access is a Windows Server feature that provides only a web-based PowerShell console. It is not a “graphical” management tool. It does not provide the rich GUI for managing services, files, registry, updates, or Azure integrations that WAC does. It is just a command prompt in a browser.
Question 179. A new security auditor has joined your company and wants to know which technologies you are using to protect your Windows Server 2022 domain controllers from credential theft. You explain that you are using a virtualization-based security (VBS) feature that isolates the Local Security Authority Subsystem Service (LSASS) process in a protected, virtualized container. This prevents tools like Mimikatz from dumping NTLM hashes and Kerberos tickets from memory, even if the attacker gains full administrator privileges. Which security feature are you describing?
A) Windows Defender Application Control (WDAC)
B) BitLocker Drive Encryption
C) Credential Guard
D) Just Enough Administration (JEA)
Correct Answer: C
Explanation:
The correct answer is C, Credential Guard. This is the specific Windows Server security feature that uses virtualization-based security (VBS) to protect the LSASS process and mitigate credential theft attacks like Pass-the-Hash.
Why C (Credential Guard) is Correct: The question provides a perfect, textbook definition of Credential Guard.
Virtualization-Based Security (VBS): Credential Guard is a VBS-backed feature. It requires a hypervisor (like Hyper-V) and hardware support (UEFI, Secure Boot, TPM 2.0).
Isolates LSASS: VBS creates an isolated “Virtual Secure Mode” (VSM) that is separate from the normal Windows kernel. Credential Guard moves the component of the LSASS process that stores credentials (the “LsaIso” process) into this VSM.
Prevents Memory Dumping: A “proxy” process is left running in the normal OS, but it contains no secrets. When an attacker gains administrator privileges and runs a tool like Mimikatz to scrape the memory of the lsass.exe process, they find nothing. The actual credentials (NTLM hashes, Kerberos Ticket-Granting Tickets) are stored in the VSM, which is inaccessible from the main OS kernel, even with “Ring 0” (kernel-level) privileges.
Mitigates Pass-the-Hash: By preventing the attacker from dumping the hash, it directly mitigates “Pass-the-Hash” (PtH) and “Pass-the-Ticket” (PtT) attacks, as the attacker can no longer steal the credential material to impersonate the user on other machines.
This is the exact purpose and function of Credential Guard.
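Group Policy ("Turn On Virtualization Based Security") is the usual way to enable Credential Guard; the registry equivalent below is an illustrative sketch only and assumes the VBS hardware prerequisites are already in place. After a reboot, the presence of the isolated LSA process confirms that secrets have moved into VSM:

```powershell
# Enable Credential Guard without UEFI lock (LsaCfgFlags: 1 = enabled with UEFI lock, 2 = enabled without lock)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name 'LsaCfgFlags' -Value 2 -Type DWord
# A restart is required for the change to take effect

# After reboot, the isolated LSA process should be running alongside lsass.exe
Get-Process -Name 'LsaIso' -ErrorAction SilentlyContinue
```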
Why A (Windows Defender Application Control – WDAC) is Incorrect: WDAC is also a VBS-backed feature (it uses HVCI), but its purpose is application allow-listing. It controls what applications and drivers can run. It does not isolate LSASS or protect credentials in memory.
Why B (BitLocker Drive Encryption) is Incorrect: BitLocker is a data-at-rest encryption technology. It encrypts the hard drive to protect data if the server is physically stolen. It provides no protection for credentials in memory (in-use) on a running server.
Why D (Just Enough Administration – JEA) is Incorrect: JEA is a delegated administration technology. It uses PowerShell to create constrained, low-privilege endpoints that allow users to perform specific administrative tasks without being granted full administrator rights. It is a preventative measure that helps stop attackers from gaining administrator privileges in the first place, but it does not protect credentials once an attacker is already an administrator.
Question 180. You are designing a storage solution for a new 4-node Storage Spaces Direct (S2D) cluster. The cluster will host mixed workloads, including VDI user profiles and general-purpose file shares. Your primary goal is to achieve a balance between good write performance and high storage efficiency. You do not need the “absolute best” performance of a pure mirror, but you want to avoid the slow random writes of pure parity. Which S2D resiliency type is designed to provide this balance by writing data quickly to a mirror tier and then later moving it to a parity tier?
A) Three-way mirror
B) Mirror-accelerated parity
C) Nested resiliency
D) Dual parity
Correct Answer: B
Explanation:
The correct answer is B, Mirror-accelerated parity. This is a “hybrid” resiliency type designed specifically for the scenario described: balancing write performance with storage efficiency, making it ideal for mixed or general-purpose workloads.
Why B (Mirror-accelerated parity) is Correct: Mirror-accelerated parity (MAP) creates a single volume that is internally composed of two tiers:
Mirror Tier: A portion of the volume (e.g., 20%) is configured as a three-way or two-way mirror. This tier is optimized for performance. When new data is written to the volume (e.g., a VDI user logs in and their profile is loaded), it is written exclusively to this fast mirror tier. This provides excellent, low-latency write performance, as the write operation only needs to complete the fast mirror write.
Parity Tier: The remaining, larger portion of the volume (e.g., 80%) is configured as dual parity. This tier is optimized for capacity efficiency.
Automatic Data Rotation: Storage Spaces Direct runs a background task that automatically "rotates" data. As data in the mirror tier becomes "cold" (infrequently accessed), it is moved from the mirror tier to the parity tier. This frees up space in the fast mirror tier for new, "hot" writes.
This “best of both worlds” approach is perfect for mixed workloads. It provides the “burst” write performance needed for VDI and active files, while also providing the storage efficiency of parity for the 80% of data that is cold or infrequently accessed. This achieves the balance that the question asks for.
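A minimal sketch of creating such a volume is shown below. It assumes the "Performance" (mirror) and "Capacity" (parity) tier names that Enable-ClusterStorageSpacesDirect creates on recent builds; the volume name and tier sizes are examples only:

```powershell
# Mirror-accelerated parity volume: ~20% fast mirror tier for hot writes, ~80% parity tier for capacity
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'MixedWorkloads' -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 200GB, 800GB
```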
Why A (Three-way mirror) is Incorrect: A three-way mirror provides the best performance, but it has the worst storage efficiency (33.3%). This is not a “balance”; it is optimizing purely for performance at the expense of capacity, which is not what the mixed-workload scenario requires.
Why C (Nested resiliency) is Incorrect: Nested resiliency is a specialized feature for two-node S2D clusters. It is not a standard resiliency type used in a 4-node cluster.
Why D (Dual parity) is Incorrect: A volume configured with pure dual parity would provide excellent storage efficiency (e.g., ~75-80% on a 4+ node cluster) but terrible random write performance. This is due to the “read-modify-write” penalty that all parity-based systems incur. This would be a very poor choice for a VDI workload and does not provide the “balance” or “good write performance” requested.