Question 141. You are the administrator for a geographically dispersed Windows Server 2022 failover cluster. The cluster, named “Cluster-Main,” spans two sites: “SiteA” (the primary site) and “SiteB” (the disaster recovery site). You are hosting a critical Hyper-V virtual machine role named “VM-SQL01.” You must configure the cluster so that “VM-SQL01” runs on a node in “SiteA” by default. If a failure causes it to fail over to “SiteB,” you want the role to remain in “SiteB” even after the nodes in “SiteA” have recovered, to prevent an unnecessary service interruption from an automatic failback. How should you configure this role?
A) Configure the “Possible Owners” of the role to only include nodes in SiteA
B) Configure “SiteA” as the “Preferred Site” for the role and set the “Failback” option to “Prevent Failback.”
C) Configure the cluster’s “Preferred Site” to be SiteA and configure the “Cluster-Wide” failback window.
D) Set the “Priority” of the role to “High” and the “Priority” of SiteB nodes to “Low.”
Correct Answer: B
Explanation:
The correct answer is B, which addresses both the initial placement and the post-recovery behavior of the clustered role. This scenario requires leveraging the “site awareness” feature of modern Windows Server Failover Clustering.
Why B (Configure “Preferred Site” and “Prevent Failback”) is Correct: This option correctly uses two distinct but complementary cluster settings to achieve the desired outcome.
“Preferred Site” (SiteA): Failover clustering (starting in Windows Server 2016) can be made “site-aware.” You can define “Fault Domains” (at the site, rack, or chassis level) for your nodes. By defining “SiteA” and “SiteB” and then setting the “Preferred Site” for the “VM-SQL01” role to “SiteA,” you are instructing the cluster to always try to start or run this role on a node in “SiteA” if one is available. This satisfies the first requirement that the VM runs in “SiteA” by default.
“Failback” set to “Prevent Failback”: Failback is the process of automatically moving a clustered role back to its preferred owner or site once that node or site rejoins the cluster after a failure. By default, failback is often enabled (or configured to happen during a specific window). By setting the failback behavior for this specific role to “Prevent Failback,” you are creating the following logic:
The role runs on “SiteA” (its preferred site).
A disaster takes “SiteA” offline.
The cluster automatically fails over the role to “SiteB.”
Later, “SiteA” nodes recover and rejoin the cluster.
Because failback is prevented, the cluster does not automatically move the role back to “SiteA.” “VM-SQL01” continues to run in “SiteB” without interruption. This allows an administrator to manually and gracefully move the role back during a planned maintenance window, thus preventing the “unnecessary service interruption” mentioned in the prompt.
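For reference, both settings can be applied with PowerShell. The following is a minimal sketch, assuming hypothetical node names (“Node1” through “Node4”) and the site and role names from the scenario; the site fault domains must be defined before a preferred site can be assigned:

```powershell
# Sketch only -- node names are hypothetical; requires Windows Server 2016 or later.
# 1. Define the two sites as fault domains and assign the nodes to them.
New-ClusterFaultDomain -Name "SiteA" -FaultDomainType Site
New-ClusterFaultDomain -Name "SiteB" -FaultDomainType Site
Set-ClusterFaultDomain -Name "Node1" -Parent "SiteA"
Set-ClusterFaultDomain -Name "Node2" -Parent "SiteA"
Set-ClusterFaultDomain -Name "Node3" -Parent "SiteB"
Set-ClusterFaultDomain -Name "Node4" -Parent "SiteB"

# 2. Make SiteA the preferred site for this role only (not cluster-wide).
(Get-ClusterGroup -Name "VM-SQL01").PreferredSite = "SiteA"

# 3. Prevent automatic failback for the role (0 = prevent, 1 = allow).
(Get-ClusterGroup -Name "VM-SQL01").AutoFailbackType = 0
```

Setting `PreferredSite` on the cluster group (rather than on the cluster object) scopes the preference to “VM-SQL01” alone, which mirrors the reasoning used to reject option C.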
Why A (Configure “Possible Owners”…) is Incorrect: This is a destructive and incorrect approach. If you configure the “Possible Owners” to only include nodes in “SiteA,” you are explicitly forbidding the cluster from ever running the role on “SiteB” nodes. This completely breaks the disaster recovery plan. In the event of a “SiteA” failure, the role would fail and would not be able to start on “SiteB,” resulting in a complete outage until “SiteA” is recovered.
Why C (Configure the cluster’s “Preferred Site” and “Cluster-Wide” failback) is Incorrect: This option is too general and potentially disruptive. Setting the “Preferred Site” at the cluster level (not the role level) would make “SiteA” the preferred site for all roles, which may not be desired. More importantly, configuring the “Cluster-Wide” failback window (e.g., “failback between 1:00 AM and 5:00 AM”) would still cause an automatic failback, just at a different time. The requirement was to prevent the automatic failback entirely, requiring manual intervention. This option would still cause an automatic, albeit delayed, service interruption.
Why D (Set the “Priority” of the role and nodes) is Incorrect: This option describes settings that are either irrelevant or non-existent in this context. You set the “Priority” (“High,” “Medium,” “Low”) on the role to determine its startup order relative to other roles. You do not set a “Priority” on a node in this manner. You can set a node’s “DrainOnShutdown” or “Quarantine” status, but not a simple priority. These settings have no bearing on failover sites or failback behavior.
Question 142. You are tasked with monitoring a hybrid environment consisting of Azure VMs and on-premises Windows Server 2022 servers that have been onboarded as Azure Arc-enabled servers. You need to collect detailed performance counters (e.g., % Processor Time, LogicalDisk\% Free Space) and specific Windows Event Logs from all servers and send this data to a central Azure Log Analytics workspace for analysis and alerting. You want to use the most modern and efficient agent available that allows for granular data collection using Data Collection Rules (DCRs). Which agent should you deploy and configure?
A) The Azure Monitor Agent (AMA)
B) The Microsoft Monitoring Agent (MMA) / Log Analytics Agent
C) The Azure Arc Connected Machine Agent
D) The Azure Site Recovery Mobility Service
Correct Answer: A
Explanation:
The correct answer is A, the Azure Monitor Agent (AMA). This is Microsoft’s new, consolidated agent designed to replace older monitoring agents and provide a more flexible and efficient data collection mechanism.
Why A (The Azure Monitor Agent – AMA) is Correct: The Azure Monitor Agent (AMA) is the future of data collection for Azure Monitor. It is designed to replace the Log Analytics Agent (MMA), the Telegraf agent (for Linux), and the Diagnostics extension.
Modern and Consolidated: AMA is the single agent needed to collect logs and metrics from both Azure VMs and hybrid machines (via Azure Arc).
Data Collection Rules (DCRs): This is the key feature. The AMA is configured using Data Collection Rules (DCRs). DCRs are a new, centralized, and granular way to define what data to collect and where to send it. You can create a DCR that says, “Collect the ‘System’ event log (Warning and Error) and the ‘% Processor Time’ performance counter from this group of machines (both Azure VMs and Arc-enabled servers) and send it to Log Analytics Workspace A.” This is far more flexible than the “all-or-nothing” configuration of the older MMA.
Efficiency: AMA offers performance improvements and a smaller footprint compared to its predecessors. It also enables future enhancements like data filtering at the source to reduce ingestion costs.
Since the requirement is to use the most modern agent that uses DCRs, the Azure Monitor Agent is the only correct answer.
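To make the deployment concrete, the following is a hedged sketch using the Az PowerShell modules (Az.ConnectedMachine and Az.Monitor). The resource group, machine name, and resource IDs are placeholders, and DCR-association parameter names have varied across Az.Monitor versions, so treat this as an outline rather than copy-paste syntax:

```powershell
# Sketch -- "rg-hybrid", "srv-onprem01", and the resource IDs are placeholders.
# Deploy the Azure Monitor Agent to an Arc-enabled server as a machine extension.
New-AzConnectedMachineExtension -ResourceGroupName "rg-hybrid" `
    -MachineName "srv-onprem01" -Location "eastus" `
    -Name "AzureMonitorWindowsAgent" `
    -Publisher "Microsoft.Azure.Monitor" -ExtensionType "AzureMonitorWindowsAgent"

# Associate an existing DCR with the machine; the DCR (not the agent) defines
# which counters and event logs are collected and which workspace receives them.
New-AzDataCollectionRuleAssociation -AssociationName "dcr-perf-events" `
    -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/rg-hybrid/providers/Microsoft.HybridCompute/machines/srv-onprem01" `
    -RuleId "/subscriptions/<sub-id>/resourceGroups/rg-hybrid/providers/Microsoft.Insights/dataCollectionRules/dcr-windows"
```

Note how the collection definition lives entirely in the DCR resource; the same rule can be associated with many Azure VMs and Arc-enabled servers, which is the granularity advantage described above.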
Why B (The Microsoft Monitoring Agent – MMA – / Log Analytics Agent) is Incorrect: This is the legacy agent. The Microsoft Monitoring Agent (MMA), also called the Log Analytics agent, was the standard for many years. However, it is now on a deprecation path, with its retirement announced. It does not use Data Collection Rules (DCRs). Its configuration is managed within the Log Analytics workspace itself (in the “Agents configuration” section), which is a less granular, “per-workspace” configuration model. It is not the “most modern” solution.
Why C (The Azure Arc Connected Machine Agent) is Incorrect: The Azure Arc Connected Machine Agent is a prerequisite, but it is not the data collection agent. The “Arc agent” (azcmagent) is responsible for onboarding the on-premises server into Azure Resource Manager. It establishes the server’s identity as an Arc-enabled resource and manages its connection to Azure. One of its primary functions is to manage and install other extensions, such as the Azure Monitor Agent (AMA). So, you need the Arc agent first to be able to deploy the AMA, but the AMA is the component that actually collects the performance counters and logs.
Why D (The Azure Site Recovery Mobility Service) is Incorrect: The ASR Mobility Service is a specialized agent used for a completely different purpose: disaster recovery. It is installed on a source machine (on-premises or in Azure) to capture all disk write I/O in real-time and replicate that data to a recovery vault for failover. It has no function related to collecting performance counters or event logs for monitoring.
Question 143. You are the lead security administrator for your organization. You have been tasked with implementing an “application allow-listing” policy on a fleet of Windows Server 2022 Hyper-V hosts. The primary objective is to prevent any unsigned or unauthorized code, including drivers, from executing in kernel-mode, thereby protecting the hypervisor itself. The solution must be capable of being protected by virtualization-based security (VBS) so that even a user with full administrative privileges cannot tamper with or bypass the policy. Which technology should you implement?
A) Windows Defender Application Control (WDAC)
B) AppLocker
C) Just Enough Administration (JEA)
D) Credential Guard
Correct Answer: A
Explanation:
The correct answer is A, Windows Defender Application Control (WDAC). WDAC is Microsoft’s premier application control solution that provides the robust, kernel-level, and VBS-protected enforcement required by the scenario.
Why A (Windows Defender Application Control – WDAC) is Correct: WDAC, which evolved from “Device Guard,” is a strict application control technology built deep into the Windows operating system.
Kernel-Mode Protection: Unlike its predecessor AppLocker, WDAC policies can control all code that runs on the system, including kernel-mode drivers, scripts, and applications. The requirement to block “unsigned or unauthorized code, including drivers” is a key differentiator that points directly to WDAC, as AppLocker cannot do this.
Default-Deny: You create a “code integrity policy” (a .CIP file) that explicitly defines what is trusted to run (e.g., “all code signed by Microsoft” or “all code signed by our internal CA”). Everything not on this list is blocked by default. This provides the “allow-list” (or “default-deny”) posture.
VBS-Protected (HVCI): This is the most critical part of the requirement. WDAC policies can be protected by Hypervisor-Protected Code Integrity (HVCI), which is a component of virtualization-based security (VBS). When HVCI (also called “Memory Integrity”) is enabled, the WDAC policy and the kernel-mode code integrity engine are moved into an isolated, hypervisor-protected “virtual secure mode” (VSM). This means that even a compromised administrator or a kernel-level exploit cannot tamper with the policy to allow malicious code to run. This directly meets the “protected by VBS” and “cannot be tampered with” requirements.
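A minimal policy-authoring sketch with the built-in ConfigCI cmdlets is shown below. The file paths are illustrative, and a production policy would normally start in audit mode for a burn-in period before rule option 3 is removed:

```powershell
# Sketch only -- paths are illustrative; run on a clean "golden" reference host.
# Build an allow-list policy trusting code at the Publisher (signer) level.
New-CIPolicy -Level Publisher -ScanPath "C:\" -UserPEs `
    -FilePath "C:\Policies\HostPolicy.xml"

# Remove audit mode (rule option 3) so the policy enforces rather than logs,
# and require HVCI so the policy is protected by virtualization-based security.
Set-RuleOption -FilePath "C:\Policies\HostPolicy.xml" -Option 3 -Delete
Set-HVCIOptions -Enabled -FilePath "C:\Policies\HostPolicy.xml"

# Compile the XML to the binary format consumed by the code integrity engine.
ConvertFrom-CIPolicy -XmlFilePath "C:\Policies\HostPolicy.xml" `
    -BinaryFilePath "C:\Windows\System32\CodeIntegrity\SiPolicy.p7b"
```

After a reboot, the enforced, HVCI-backed policy blocks any user-mode or kernel-mode code whose signer is not on the allow list, which is the posture the scenario requires.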
Why B (AppLocker) is Incorrect: AppLocker is a useful but less secure application control technology. Its primary weakness is that it cannot be used to control kernel-mode drivers, which is a key requirement. An attacker could still load a malicious (but signed) driver to bypass AppLocker. Furthermore, AppLocker policies are not protected by VBS; a local administrator can easily stop the AppLocker service (AppIDSvc) or modify the GPO to disable the policies, making it easy to bypass for a privileged attacker.
Why C (Just Enough Administration – JEA) is Incorrect: JEA is a security feature for a different purpose. It provides a way to delegate administrative tasks with least privilege using constrained PowerShell endpoints. It defines what commands an administrator can run (e.g., “Restart-Service”), not what applications or drivers can execute on the operating system’s kernel.
Why D (Credential Guard) is Incorrect: Credential Guard is another feature that uses virtualization-based security (VBS), which makes it a plausible distractor. However, Credential Guard’s sole purpose is to protect the Local Security Authority Subsystem Service (LSASS) process to prevent credential theft (e.g., Pass-the-Hash). It isolates NTLM hashes and Kerberos tickets in VSM. It has absolutely no function related to application or driver allow-listing.
Question 144. Your company is retiring a Windows Server 2012 file server named “FS-Old” and replacing it with a new Windows Server 2022 server named “FS-New.” The legacy server hosts 15 TB of data across 50 SMB shares with complex NTFS permissions. You must migrate all data, shares, and permissions. A critical requirement is to complete the migration with minimal user impact. Specifically, you must be able to perform the final cutover during a short maintenance window, after which all client PCs and applications accessing the old server name (“\\FS-Old”) must be automatically redirected to the new server without any client-side reconfiguration. Which tool is explicitly designed for this end-to-end scenario?
A) Robocopy with the /MIR and /SEC switches
B) Distributed File System Replication (DFS-R)
C) Storage Migration Service (SMS)
D) Azure File Sync with Cloud Tiering
Correct Answer: C
Explanation:
The correct answer is C, the Storage Migration Service (SMS). This is a comprehensive, modern tool introduced in Windows Server 2019 and included in Windows Server 2022, built specifically to manage the entire lifecycle of a file server migration, including the critical identity cutover.
Why C (Storage Migration Service – SMS) is Correct: The Storage Migration Service, managed through Windows Admin Center, is an orchestrated, three-step solution that addresses every requirement in the prompt.
Inventory: The first phase involves the SMS orchestrator server scanning the source server (“FS-Old”). It inventories everything: all volumes, all files, all SMB share definitions (including their specific settings like access-based enumeration), and all NTFS and share permissions.
Transfer: The second phase is the bulk data copy. SMS uses a high-performance, multi-threaded transfer engine to move the data from “FS-Old” to “FS-New.” It fully preserves all NTFS permissions. This step is idempotent, meaning you can run it multiple times. The first run copies all data, and subsequent runs perform a delta-sync, copying only new or changed files. This is perfect for minimizing the final cutover window, as you can get the data 99.9% in sync before the maintenance window.
Cutover: This is the most crucial phase and the reason SMS is the correct answer. During the planned maintenance window, you initiate the cutover. SMS performs a final delta-sync, then securely takes over the identity of the source server. It does this by renaming “FS-Old” to a new random name, transferring the original name (“FS-Old”) and its IP address(es) to the new server (“FS-New”), and finally, applying all 50 SMB share configurations to “FS-New.” The result is that when the cutover is complete (in minutes), “FS-New” is now “FS-Old” on the network. When users and applications try to connect to \\FS-Old, their requests seamlessly land on the new server. No client-side changes are needed.
Why A (Robocopy…) is Incorrect: Robocopy is a file-copy utility, not a migration service. While Robocopy /MIR /SEC can effectively copy the files and NTFS permissions, it does absolutely nothing for the 50 share configurations (you would have to manually recreate them) and, most importantly, it has no mechanism to perform the identity cutover. You would be left with a massive manual task of reconfiguring all clients, or attempting a risky manual server rename, which is what SMS automates.
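To illustrate the gap, a typical Robocopy-only migration pass might look like the sketch below (share paths from the scenario; switch values such as the thread count are illustrative):

```powershell
# Sketch -- copies file data, mirrors deletions, and carries full NTFS metadata
# (/COPYALL = data, attributes, timestamps, security, owner, auditing), using
# 32 threads with limited retries and a log file.
robocopy \\FS-Old\Data \\FS-New\Data /MIR /COPYALL /MT:32 /R:2 /W:5 `
    /LOG:C:\Logs\fs-migration.log

# What this does NOT do: recreate the 50 SMB share definitions, carry
# share-level permissions or settings like access-based enumeration, or
# transfer the "FS-Old" name and IP to FS-New -- the steps SMS automates.
```

Even a perfect Robocopy run therefore leaves the share configuration and the identity cutover as manual, error-prone work.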
Why B (Distributed File System Replication – DFS-R) is Incorrect: DFS-R is a technology for replicating data between two or more active file servers. It is not a migration tool. It does not migrate share configurations and has no cutover mechanism. It is designed for ongoing synchronization, not a one-time migration and identity transfer.
Why D (Azure File Sync…) is Incorrect: Azure File Sync is a hybrid solution for synchronizing an on-premises file server with an Azure File Share. Its purpose is to centralize data in the cloud while providing a local cache. It is not a tool for migrating from one on-premises server to another. You could use SMS to migrate into a server that is then enabled for Azure File Sync, but AFS itself does not perform the server-to-server migration.
Question 145. You are planning a disaster recovery strategy for your on-premises Hyper-V environment, which is managed by System Center Virtual Machine Manager (SCVMM). You intend to use Azure Site Recovery (ASR) to replicate your virtual machines to an Azure region. To orchestrate this replication, you must install the ASR provider on a specific server and register your SCVMM environment with an Azure Recovery Services vault. On which server must the Azure Site Recovery provider be installed?
A) On each individual Hyper-V host in the cluster.
B) On the Azure Recovery Services vault, as a cloud service.
C) On the SCVMM (System Center Virtual Machine Manager) server.
D) On each guest virtual machine that needs to be replicated.
Correct Answer: C
Explanation:
The correct answer is C. When protecting a Hyper-V environment that is managed by SCVMM, the Azure Site Recovery provider acts as the communication broker, and it must be installed on the SCVMM server itself.
Why C (On the SCVMM server) is Correct: Azure Site Recovery offers different deployment models. When you have an environment managed by SCVMM, ASR leverages SCVMM’s capabilities to orchestrate and manage the replication.
Central Orchestration Point: The SCVMM server is the central management point for all the Hyper-V hosts and VMs in its “fabric.” It understands which VMs are on which hosts, their network configurations, and their storage.
ASR Provider Role: By installing the Azure Site Recovery provider directly on the SCVMM server, you are extending SCVMM’s capabilities. The provider communicates up to the Azure Recovery Services vault (to send metadata and receive replication policies) and down to the SCVMM database (to discover VMs and orchestrate actions).
Replication Process: When you enable replication for a VM, the ASR provider on the SCVMM server instructs the Hyper-V host (where the VM is running) to begin replicating. The actual data flow is from the Hyper-V host directly to Azure (or to the replica), but the orchestration and management of that process is handled via the SCVMM server.
Registration: After installing the provider, you “register” the SCVMM server with the vault. This establishes the secure communication channel and makes the SCVMM fabric (its clouds, hosts, and VMs) visible in the Azure portal for protection.
Why A (On each individual Hyper-V host) is Incorrect: This is the process you would follow if you were protecting Hyper-V VMs without SCVMM. In a “Hyper-V Site” (non-SCVMM) scenario, you install the ASR provider (and the Recovery Services agent) directly on each host. But since the prompt specifically states the environment is “managed by System Center Virtual Machine Manager,” this method is incorrect and would not work.
Why B (On the Azure Recovery Services vault) is Incorrect: The ASR provider is an on-premises software component that you must download and install. The Azure Recovery Services vault is a cloud-based PaaS (Platform-as-a-Service) resource. You do not install software on the vault; you register your on-premises components with the vault.
Why D (On each guest virtual machine) is Incorrect: This describes the process for replicating VMware virtual machines or physical servers. In those scenarios, you must install the “ASR Mobility Service” inside the guest operating system of each machine to capture I/O. For Hyper-V VMs (both SCVMM-managed and non-SCVMM), this is not required. Hyper-V replication is host-based and agentless (from the guest VM’s perspective), which is a significant advantage.
Question 146. You are designing the storage for a new 4-node Storage Spaces Direct (S2D) cluster running Windows Server 2022. The servers are each populated with NVMe drives for caching and high-capacity SSDs for storage. The primary workload is a high-performance database application that requires the absolute best I/O performance (both reads and writes) and can tolerate a storage efficiency of 33.3%. Which S2D resiliency type should you configure for the volumes hosting the database files?
A) Three-way mirror
B) Mirror-accelerated parity
C) Nested two-way mirror
D) Dual parity
Correct Answer: A
Explanation:
The correct answer is A, three-way mirror. For high-performance workloads in a 4-node cluster where performance is the priority over storage efficiency, a three-way mirror is the standard and best-performing resiliency choice.
Why A (Three-way mirror) is Correct: In a Storage Spaces Direct environment, a three-way mirror means that for every piece of data written, three copies of that data are created and stored on separate physical disks on different nodes.
Performance: This resiliency type offers the best performance, especially for writes. A write operation is considered “complete” as soon as two of the three copies are written (it writes the third in the background). Because the data exists in multiple locations, random read operations can also be satisfied from the copy that is “closest” or least busy, which improves read performance.
Fault Tolerance: A three-way mirror in a 4+ node cluster can tolerate the failure of two nodes simultaneously, or a single node and a single disk in another node, providing a high level of availability.
Efficiency: The drawback, which the prompt states is acceptable, is poor storage efficiency. To store 1 TB of data, you need 3 TB of physical storage. This results in 33.3% efficiency, which perfectly matches the scenario’s constraint (“can tolerate a storage efficiency of 33.3%”).
Given the requirements for “absolute best I/O performance” and the specific tolerance for 33.3% efficiency, the three-way mirror is the textbook answer.
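As a sketch, a three-way mirror volume is created by requesting the Mirror resiliency setting with a physical disk redundancy of 2 (the pool name, volume name, and size below are illustrative; on a four-node S2D cluster, Mirror with redundancy 2 yields a three-way mirror):

```powershell
# Sketch -- pool/volume names and size are illustrative.
# Three copies of every write (redundancy 2 = tolerate two failures);
# 3 TB of physical capacity is consumed per 1 TB stored (33.3% efficiency).
New-Volume -StoragePoolFriendlyName "S2D on Cluster1" `
    -FriendlyName "DB-Volume01" -FileSystem CSVFS_ReFS `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -Size 2TB
```

CSVFS_ReFS is the usual file-system choice for S2D cluster shared volumes, pairing the mirror's write performance with ReFS features such as block cloning.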
Why B (Mirror-accelerated parity) is Incorrect: Mirror-accelerated parity is a hybrid resiliency type. It creates a volume where a portion of the volume is a mirror (for high-performance writes) and the rest is parity (for high-capacity efficiency). Data is written quickly to the mirror part and then “rotated” down to the parity part over time. While this offers a balance of performance and efficiency, it does not provide the “absolute best I/O performance” for the entire volume, as the parity-tier data will be slower to read. Its efficiency would also be much higher than 33.3%.
Why C (Nested two-way mirror) is Incorrect: Nested resiliency is a feature designed for two-node S2D clusters. It provides a way to tolerate multiple failures even with only two nodes. It is not a standard configuration for a four-node cluster and is significantly more complex and less performant than a standard three-way mirror.
Why D (Dual parity) is Incorrect: Dual parity (similar to RAID-6) writes data in “stripes” with two parity blocks. This provides excellent storage efficiency (e.g., ~66-80% efficient), but it comes at a massive performance cost for random writes. Parity calculations are computationally intensive, and all “write” operations become “read-modify-write” operations, which are much slower than the simple writes of a mirror. This is the opposite of what you would choose for a high-performance database.
Question 147. You are the administrator for a company with a central headquarters and 15 branch offices. The headquarters has a large file server. You want to consolidate all file data into a single Azure File Share to simplify backups. However, the branch offices have slow and unreliable internet connections, and users need low-latency access to frequently used files. You decide to implement Azure File Sync. You deploy a Windows Server in each branch office to act as a local cache. To minimize local disk usage on the branch office servers, how should you configure the “Cloud Tiering” policy?
A) Enable the “Volume Free Space Policy” and set it to a high percentage (e.g., 80%).
B) Disable Cloud Tiering entirely to ensure all files are cached locally.
C) Enable the “Date Policy” and configure it to tier files not accessed in the last 3 days.
D) Enable the “Volume Free Space Policy” and set it to a low percentage (e.g., 20%).
Correct Answer: A
Explanation:
The correct answer is A. The “Volume Free Space Policy” is the primary mechanism for managing the local cache size, and setting a high percentage forces the agent to be more aggressive about tiering files to keep that amount of space free.
Why A (Enable “Volume Free Space Policy” and set to high percentage) is Correct: This can be a confusing concept, but the “Volume Free Space Policy” defines the goal for the amount of free space the Azure File Sync agent should try to maintain on the volume. Let’s break down the logic:
You have a 1 TB volume on the branch office server.
The “Volume Free Space Policy” is set to 80%.
This means the Azure File Sync agent’s goal is to keep 800 GB of the volume free at all times.
This, in turn, means the agent will only allow the local cache of files (the “hot” data) to consume 20% (or 200 GB) of the volume.
As users access files, the cache fills up. As soon as the cache grows and the free space drops below 80%, the agent’s cloud tiering “heat-store” process will kick in. It will aggressively identify the “coldest” (least recently used) files, “tier” them (leave the metadata on disk but purge the data content), and continue doing so until the 80% free space goal is met again.
This directly achieves the goal of “minimizing local disk usage” by maintaining a very small local cache and a large amount of free space.
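A server endpoint with this aggressive tiering policy can be created with the Az.StorageSync module. The sketch below assumes hypothetical variables (`$syncGroup`, `$registeredServer`) already populated from earlier `Get-AzStorageSync*` calls, and an illustrative local path; parameter names should be checked against your module version:

```powershell
# Sketch -- $syncGroup and $registeredServer are assumed to come from prior
# Get-AzStorageSyncGroup / Get-AzStorageSyncServer calls; path is illustrative.
# VolumeFreeSpacePercent 80 keeps 80% of the volume free, capping the local
# cache at roughly 20% of the volume.
New-AzStorageSyncServerEndpoint -Name "Branch01-Cache" `
    -SyncGroup $syncGroup `
    -ServerResourceId $registeredServer.ResourceId `
    -ServerLocalPath "D:\CompanyShare" `
    -CloudTiering -VolumeFreeSpacePercent 80
```

The key point for the exam scenario is the direction of the number: a higher free-space percentage means a smaller local cache, not a larger one.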
Why B (Disable Cloud Tiering) is Incorrect: This is the exact opposite of the requirement. If you disable cloud tiering, the branch office server will attempt to download and store a full copy of the entire Azure File Share. This would maximize local disk usage and is not a “cache” but a full replica, which would overwhelm the local disk.
Why C (Enable the “Date Policy”) is Incorrect: The “Date Policy” is a secondary setting that complements the Volume Free Space Policy. It defines the minimum age of a file before it can be tiered (e.g., “do not tier files that have been accessed in the last 3 days”). This is a “proactive” tiering policy. However, the primary driver for cloud tiering is the reactive “Volume Free Space Policy,” which is the “set-it-and-forget-it” way to manage the cache size. The scenario is about managing the total disk usage, which is the Volume Free Space Policy’s job.
Why D (Enable “Volume Free Space Policy” and set to low percentage) is Incorrect: This is a common mistake. If you set the policy to 20%, you are telling the agent, “I am happy as long as you keep at least 20% of the volume free.” This means the agent will allow the local cache to grow and consume up to 80% of the disk. This maximizes the local cache size and does not minimize local disk usage.
Question 148. Your organization’s security team is mandated to mitigate credential theft attacks, specifically Pass-the-Hash (PtH) and Pass-the-Ticket (PtT). You are hardening your Windows Server 2022 domain controllers and other high-value servers. You plan to implement Credential Guard. To successfully enable this feature, which hardware and software components are required on the server?
A) A TPM 2.0 chip, UEFI with Secure Boot, and the Hyper-V role installed.
B) Windows Defender Application Control (WDAC) and AppLocker.
C) Azure Active Directory (Azure AD) and Microsoft Defender for Identity.
D) BitLocker Drive Encryption and the Windows Server Backup feature.
Correct Answer: A
Explanation:
The correct answer is A. Credential Guard is a hardware-backed security feature that relies on virtualization-based security (VBS), which has a specific set of hardware and software prerequisites.
Why A (TPM 2.0, UEFI with Secure Boot, and Hyper-V role) is Correct: Credential Guard leverages Virtualization-Based Security (VBS) to run a critical security process in an isolated “virtual secure mode” (VSM) that is inaccessible to the main operating system kernel. To create and protect this VSM, several components are mandatory:
Hyper-V Role: VBS itself is a specialized instance of the Hyper-V hypervisor. The hypervisor is what creates and enforces the isolation boundary between the normal OS and the VSM. Therefore, the Hyper-V role (or the underlying hypervisor platform) must be installed, even if you do not plan to run any guest virtual machines.
UEFI with Secure Boot: Secure Boot is a feature of the UEFI firmware. It ensures that the machine boots only using trusted software (e.g., a Microsoft-signed bootloader). This is critical to protect the VBS environment from boot-time rootkits that might try to load before Credential Guard and compromise it.
A TPM 2.0 Chip (Trusted Platform Module): The TPM is a hardware cryptoprocessor. It is used to securely store and protect the encryption keys that VBS uses to “seal” its data. Using a TPM ensures that even if an attacker steals the server’s hard drive, they cannot extract the VBS keys and decrypt the credentials. It also provides attestation that the boot process was secure.
Without all of these components, VBS cannot be securely initialized, and Credential Guard cannot be enabled.
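In practice, you can verify the VBS/Credential Guard state and enable the feature from PowerShell. The sketch below uses the documented `Win32_DeviceGuard` WMI class and the `LsaCfgFlags` LSA registry value (1 = enabled with UEFI lock, 2 = enabled without lock); a reboot is required for the change to take effect:

```powershell
# Sketch -- check which VBS security services are configured and running
# (in SecurityServicesRunning, a value of 1 indicates Credential Guard).
Get-CimInstance -ClassName Win32_DeviceGuard `
    -Namespace "root\Microsoft\Windows\DeviceGuard" |
    Select-Object SecurityServicesConfigured, SecurityServicesRunning

# Enable Credential Guard with UEFI lock via the LSA registry key, then reboot.
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\LSA" `
    -Name "LsaCfgFlags" -Value 1 -PropertyType DWORD -Force
```

In domain environments, the same setting is more commonly pushed through Group Policy (Computer Configuration > Administrative Templates > System > Device Guard).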
Why B (WDAC and AppLocker) is Incorrect: Windows Defender Application Control (WDAC) and AppLocker are application control technologies. WDAC also uses VBS (in the form of HVCI) to protect its own policies, but these are not prerequisites for Credential Guard. They are separate, complementary security features. You can run Credential Guard without WDAC, and vice-versa (though they are better together).
Why C (Azure AD and Microsoft Defender for Identity) is Incorrect: These are hybrid and cloud-based security services. Azure AD is a cloud identity provider. Microsoft Defender for Identity is a monitoring solution that detects attacks like Pass-the-Hash by analyzing authentication logs from your on-premises domain controllers. It reports on PtH; it does not prevent it on the local machine. Credential Guard is the on-premises prevention feature.
Why D (BitLocker and Windows Server Backup) is Incorrect: BitLocker is a data-at-rest encryption technology that encrypts the hard drive. While it can (and should) use the TPM, and is often enabled alongside Credential Guard, it is not a strict prerequisite for Credential Guard itself. Windows Server Backup is a feature for backing up and restoring data and has no relationship to VBS or credential isolation.
Question 149. You are managing a 12-node Hyper-V failover cluster running Windows Server 2022. You want to automate the monthly patching of all cluster nodes using Cluster-Aware Updating (CAU). The cluster hosts critical, 24/7 workloads, so the patching process must be fully automated and initiated by the cluster itself on a predefined schedule (e.g., “the 3rd Sunday of every month at 2:00 AM”) without any external administrator intervention. Which CAU operating mode must you configure?
A) Remote-updating mode
B) Self-updating mode
C) Patch-orchestrator mode
D) Asynchronous-updating mode
Correct Answer: B
Explanation:
The correct answer is B, self-updating mode. Cluster-Aware Updating (CAU) has two distinct operating modes, and “self-updating” is the one designed for fully autonomous, schedule-based patching.
Why B (Self-updating mode) is Correct: Self-updating mode is the “set-it-and-forget-it” solution for automated cluster patching.
CAU Clustered Role: When you configure self-updating mode, the CAU wizard adds a new clustered role to the failover cluster. This role, named “Cluster-Aware Updating,” is responsible for acting as the “Update Orchestrator.”
Autonomous Operation: This clustered role runs on one of the cluster nodes (and will failover like any other role). It contains the schedule and the configuration for the patching run.
Scheduled Execution: At the specified time (e.g., “3rd Sunday at 2:00 AM”), the CAU clustered role “wakes up” and begins the “Updating Run.”
Orchestration: It then proceeds to patch every other node in the cluster, one at a time. The process is: place a node in maintenance mode (draining all VMs via Live Migration), instruct the node to install updates (from WSUS or Windows Update), reboot the node, verify it is healthy, and then move to the next node.
Patches Itself Last: Finally, once all other nodes are patched, the CAU role will failover to one of the already patched nodes. It will then repeat the process on the node it originally ran on.
This model is fully self-contained and “initiated by the cluster itself,” which perfectly matches the scenario’s requirement for automation without external intervention.
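The configuration above can be sketched in PowerShell. The cluster name and start date below are hypothetical, and the exact scheduling parameters should be confirmed against the Add-CauClusterRole documentation:

```powershell
# Add the CAU clustered role in self-updating mode, scheduled for the
# 3rd Sunday of every month at 2:00 AM. Run from a node or management
# host with the Failover Clustering PowerShell tools installed.
# "HV-Cluster01" and the start date are hypothetical values.
Add-CauClusterRole -ClusterName "HV-Cluster01" `
    -DaysOfWeek Sunday `
    -WeeksOfMonth 3 `
    -StartDate "2024-01-21 02:00" `
    -EnableFirewallRules `
    -Force

# Confirm the configured mode and schedule.
Get-CauClusterRole -ClusterName "HV-Cluster01"
```

By contrast, an on-demand run in remote-updating mode would be started from a management computer with Invoke-CauRun, which is exactly the external intervention the scenario rules out.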
Why A (Remote-updating mode) is Incorrect: Remote-updating mode is the manual or externally-orchestrated method. In this mode, an administrator must manually launch the CAU tool from a remote management computer (like a Windows 11 PC or another server). The administrator then clicks the “Apply updates to this cluster” button to start the “Updating Run” on-demand. The orchestration logic runs on the remote computer, not on the cluster itself. While you could schedule a task on a remote server to run the PowerShell cmdlets, this is not “initiated by the cluster itself.”
Why C (Patch-orchestrator mode) is Incorrect: “Patch-orchestrator mode” is not a valid CAU operating mode. The CAU clustered role in self-updating mode is the orchestrator, but the mode itself is not called this. This is a plausible-sounding but incorrect technical term.
Why D (Asynchronous-updating mode) is Incorrect: “Asynchronous-updating mode” is not a real term for CAU. The CAU process is, by its nature, synchronous and sequential—it patches one node, waits for it to finish, and then moves to the next. It does not patch all nodes asynchronously (at the same time) as that would bring down the entire cluster.
Question 150. Your company has a hybrid infrastructure. You use Azure Automation Update Management to patch your Azure VMs and your on-premises Windows Server 2019 servers. The on-premises servers are all connected via Azure Arc-enabled servers. You have approved a new critical update for deployment. An administrator reports that the on-premises servers are not receiving the update, but the Azure VMs are. You have verified the on-premises servers are online and communicating with Azure ArC) What is the most likely component that is misconfigured or missing on the on-premises servers?
A) The Azure Arc Connected Machine Agent
B) The Azure Monitor Agent (AMA)
C) The Hybrid Runbook Worker
D) The Azure Site Recovery Mobility Service
Correct Answer: C
Explanation:
The correct answer is C, the Hybrid Runbook Worker. Azure Automation Update Management relies on this component to execute update tasks on non-Azure machines.
Why C (The Hybrid Runbook Worker) is Correct: Azure Automation Update Management is a complex solution with several parts. Here is how it works for hybrid machines:
Azure Automation Account: This is the “brain” in Azure that holds the update schedules, approved updates (via a linked WSUS or Windows Update), and the orchestration logic.
Azure Arc-enabled servers: The server is onboarded with the Arc agent, making it visible to Azure.
Log Analytics Workspace: The server reports its “update status” to a Log Analytics workspace.
Hybrid Runbook Worker (HRW): This is the execution component. The Azure Automation service cannot directly reach into your on-premises network to install a patch. Instead, you designate one or more on-premises servers as a “Hybrid Runbook Worker.” When an update deployment starts, the Automation Account sends a “job” to the HRW. The HRW (which is on-premises) then executes the runbook, which contacts the target servers (also on-premises) to scan for, download, and install the patches.
If the Azure VMs are working, it means the Automation Account and schedule are correct. If the on-premises Arc-enabled servers are online but not getting the update, it strongly implies that the execution part of the workflow is broken. The most likely cause is that the Hybrid Runbook Worker role is not installed, is offline, or is not correctly configured in the Automation Account to manage those servers.
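As a quick diagnostic from the Azure side, you could confirm whether any Hybrid Runbook Worker group is actually registered with the Automation account. The resource names below are hypothetical, and the sketch assumes the Az.Automation module and an authenticated session:

```powershell
# Requires the Az.Automation module and a signed-in context
# (Connect-AzAccount). "rg-automation" and "aa-patching" are
# hypothetical names.
# If this returns nothing, no on-premises worker is available to
# execute update deployments against the Arc-enabled servers.
Get-AzAutomationHybridWorkerGroup `
    -ResourceGroupName "rg-automation" `
    -AutomationAccountName "aa-patching"
```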
Why A (The Azure Arc Connected Machine Agent) is Incorrect: The prompt states that “You have verified the on-premises servers are online and communicating with Azure Arc.” This means the Connected Machine Agent is working correctly. Its job is to provide the “connection” to Azure, but it does not execute the update runbooks itself.
Why B (The Azure Monitor Agent – AMA) is Incorrect: The AMA (or its predecessor, the MMA/Log Analytics agent) is responsible for reporting on update compliance and sending data to the Log Analytics workspace. While it is a required component for Update Management, a failure here would typically manifest as the server “not reporting” its status. The failure to receive or install the update (the “push” action) is the responsibility of the Hybrid Runbook Worker.
Why D (The Azure Site Recovery Mobility Service) is Incorrect: This agent is completely unrelated. It is used for disaster recovery (Azure Site Recovery) to replicate disk I/O to Azure. It has no role in the Azure Automation Update Management patching process.
Question 151. A file server running Windows Server 2022 is experiencing intermittent slowdowns. Your monitoring team wants to move from reactive to proactive management. You are asked to implement a built-in Windows Server feature that uses local machine learning models to analyze performance counters and system events. The goal is to forecast future resource consumption and predict when the server is likely to run out of storage or CPU capacity, generating alerts before the bottleneck occurs. Which feature, managed via Windows Admin Center or PowerShell, provides this capability?
A) Performance Monitor Data Collector Sets
B) System Insights
C) Azure Arc with VM Insights
D) Windows Defender Application Control (WDAC)
Correct Answer: B
Explanation:
The correct answer is B, System Insights. This is a feature introduced in Windows Server 2019 that is designed specifically for local, predictive analytics.
Why B (System Insights) is Correct: System Insights is a built-in feature that brings local predictive capabilities to Windows Server.
Local Analysis: It runs entirely on the server itself. It does not require any cloud connectivity, although it can be easily managed and visualized through Windows Admin Center.
Machine Learning: It uses a set of built-in machine learning models to analyze historical system data, such as performance counters (CPU, memory, storage, networking) and event logs.
Forecasting / Prediction: Based on this analysis, it generates forecasts about future resource usage. By default, it includes capabilities to predict:
CPU capacity: Forecasting when CPU usage will reach a sustained high.
Network capacity: Forecasting usage for network adapters.
Storage consumption: Forecasting when a volume will run out of free space.
Proactive Alerts: When a prediction indicates an impending issue (e.g., “Volume C: is forecast to be full in 20 days”), it generates a specific event in the Event Log. This event can then be used to trigger alerts or automated responses, allowing administrators to “get ahead” of the problem. This directly matches the “proactive” and “predictive” requirements of the prompt.
Why A (Performance Monitor Data Collector Sets) is Incorrect: Data Collector Sets are a feature of Performance Monitor used to collect and log performance data over time. This is the raw data that a tool like System Insights would analyze. However, Performance Monitor itself has no built-in machine learning or predictive forecasting engine. It is a data-gathering and real-time-display tool, not a predictive one.
Why C (Azure Arc with VM Insights) is Incorrect: Azure Monitor VM Insights (which you would use on an Arc-enabled server) is a cloud-based monitoring solution. It collects logs and metrics and sends them to a Log Analytics workspace, where Azure’s powerful analytics and ML engines can analyze the data. While this is a very powerful solution, the question asks for a built-in Windows Server feature that uses local machine learning models. System Insights is the on-premises, built-in feature for this.
Why D (Windows Defender Application Control – WDAC) is Incorrect: WDAC is a security feature. It is an application allow-listing technology used to control which applications and drivers are allowed to run. It has absolutely no function related to performance monitoring or predictive analytics.
Question 152. The helpdesk team at your company needs the ability to perform a single, specific task on your Windows Server 2019 domain controllers: check the status of and restart the “Print Spooler” service. For security reasons, you cannot grant them “Domain Admin” or “Local Administrator” rights, and you must not allow them to use Remote Desktop (RDP). You need a solution that provides them with a highly constrained, auditable, command-line interface that only allows them to run Get-Service and Restart-Service with the -Name Spooler parameter. Which Windows Server security feature is designed for this exact purpose?
A) Just Enough Administration (JEA)
B) Role-Based Access Control (RBAC) in Windows Admin Center
C) Dynamic Access Control (DAC)
D) Credential Guard
Correct Answer: A
Explanation:
The correct answer is A, Just Enough Administration (JEA). JEA is a PowerShell-based security technology specifically created to enable delegated administration for specific tasks, adhering to the principle of least privilege, which perfectly matches the scenario.
Why A (Just Enough Administration – JEA) is Correct: JEA is a feature built into PowerShell that allows you to create constrained endpoints. When a user connects to a JEA endpoint (e.g., via Enter-PSSession -ComputerName DC01 -ConfigurationName Helpdesk-Printing), their session is severely restricted in several ways that meet the prompt’s requirements:
Reduced Privilege: The user’s session runs as a temporary, virtual, non-administrator account. They do not use their own high-privilege credentials, and they are not local admins on the box. This directly meets the “must not be granted full Local Administrator rights” requirement.
Limited Commands: You define what the user can do using a Role Capability File (.psrc). In this file, you explicitly whitelist the exact cmdlets, functions, and external commands the user is allowed to run. For this scenario, you would only allow Get-Service and Restart-Service.
Parameter-level Constraint: JEA is so granular that you can even constrain the parameters. You can create a configuration that only allows Restart-Service to be run when the -Name parameter is exactly “Spooler.” Any attempt to restart another service (e.g., Restart-Service -Name KDC) would fail.
No RDP/GUI: JEA is accessed purely through PowerShell remoting. This meets the “command-line” requirement and prevents RDP access.
Auditable: All commands run within a JEA session are automatically logged in detailed PowerShell transcripts and event logs, providing a clear audit trail.
Why B (Role-Based Access Control – RBAC – in Windows Admin Center) is Incorrect: Windows Admin Center (WAC) has its own RBAC model that can limit what users see and do within the WAC web interface. While you could create a role, this is a GUI-based solution. The prompt specifies a “command-line interface” and explicitly prohibits RDP, implying a programmatic or shell-based solution. JEA is the underlying, native PowerShell technology for this.
Why C (Dynamic Access Control – DAC) is Incorrect: Dynamic Access Control (DAC) is a technology framework focused on data governance and file access, not administrative tasks. DAC allows you to classify files (e.g., “Confidential”) and write access policies based on user claims (e.g., “User’s department = Finance”). It is used to control who can access data on a file server, not who can administer services on a domain controller.
Why D (Credential Guard) is Incorrect: Credential Guard is a hardening feature. It uses VBS to protect the LSASS process and prevent credential theft. It is a defensive technology that protects the server from attackers; it is not a delegation framework that grants limited permissions to administrators.
Question 153. You are designing a disaster recovery solution between your primary data center and a secondary data center. The two sites are connected by a 1 Gbps WAN link with an average round-trip network latency of 25 milliseconds (ms). You need to replicate a 2 TB volume from a SQL Server in the primary site to a server in the DR site. The business requires a Recovery Point Objective (RPO) of zero, meaning no data loss is acceptable in the event of a primary site failure. Which Windows Server technology and replication mode should you attempt to use, and what is the most likely outcome?
A) Storage Replica in Asynchronous mode. This will work but will not provide an RPO of zero.
B) Storage Replica in Synchronous mode. This will fail because the network latency is too high.
C) Hyper-V Replica. This will work, but the minimum RPO is 30 seconds.
D) Distributed File System Replication (DFS-R). This will work but is not suitable for SQL database files.
Correct Answer: B
Explanation:
The correct answer is B. Storage Replica’s synchronous mode is the only option that provides an RPO of zero, but it has strict network requirements that the 25ms link violates, making it fail or perform catastrophically.
Why B (Storage Replica in Synchronous mode…) is Correct: This option correctly identifies the technology, the mode, and the problem.
Technology: Storage Replica is a block-level, volume-to-volume replication technology in Windows Server, designed for disaster recovery and stretch clusters. It is the only Windows-native feature (besides S2D Stretch Clusters) that can offer an RPO of zero.
Mode (Synchronous): Synchronous replication is required for an RPO of zero. When the SQL Server writes data to the 2 TB volume, Storage Replica intercepts that write. It sends the same write I/O to the DR site and writes it to the local disk. The application (SQL Server) does not receive the “write complete” acknowledgement until both the local write and the remote write have completed. This guarantees the data is in both places.
The Problem (Latency): The cost of this zero-data-loss guarantee is that the application’s write performance is now bound by the speed of light—specifically, the round-trip network latency. The application must wait for the 25ms round trip for every single write I/O. Microsoft’s official guidance for Storage Replica synchronous replication is a strict requirement of < 5 milliseconds (ms) round-trip latency. The 25ms latency in the prompt is five times the maximum tolerance. Attempting to use synchronous replication on this link would result in an unusable, catastrophically slow SQL Server, and the replication link itself would likely fail or constantly fall out of sync.
Why A (Storage Replica in Asynchronous mode…) is Incorrect: This is a very plausible alternative. Storage Replica can run in Asynchronous mode. In this mode, the application writes to the local disk, gets an immediate acknowledgement, and then Storage Replica sends the data to the DR site “in the background.” This mode would work over a 25ms link, but as the option itself states, it does not provide an RPO of zero. There would be a data lag of seconds or minutes. Therefore, it does not meet the business requirement for RPO=0.
Why C (Hyper-V Replica…) is Incorrect: Hyper-V Replica is a VM-level, asynchronous replication technology. Its most frequent replication interval is 30 seconds. This means it has a minimum RPO of 30 seconds, which does not meet the RPO=0 requirement.
Why D (Distributed File System Replication – DFS-R) is Incorrect: DFS-R is a file-based, multi-master replication technology. It is explicitly not supported for replicating live database files (like SQL’s .mdf and .ldf files) because these files are constantly locked and open, which DFS-R cannot handle. Furthermore, it is asynchronous and provides no RPO guarantees.
Question 154. An older, business-critical application runs on a physical server with Windows Server 2012 R2. The hardware is failing, and you must migrate the server to a virtual machine on your new Windows Server 2022 Hyper-V cluster. The Storage Migration Service does not support this source OS for migration to a VM (it’s for file server migration). You need to perform a Physical-to-Virtual (P2V) conversion. What is the most common and effective Microsoft-provided utility to create a virtual hard disk (VHDX) from the live, running physical server with minimal downtime?
A) The Microsoft Virtual Machine Converter (MVMC)
B) Disk2vhd
C) Windows Server Backup (performing a bare-metal backup)
D) Azure Site Recovery (ASR)
Correct Answer: B
Explanation:
The correct answer is B, Disk2vhd. This is a simple, lightweight, and effective utility from the official Windows Sysinternals toolkit designed for this exact P2V (Physical-to-Virtual) scenario.
Why B (Disk2vhd) is Correct: Disk2vhd is the de facto tool for simple P2V conversions in the modern Windows ecosystem.
Sysinternals Utility: It is a free, Microsoft-provided (part of Sysinternals) tool.
Live Conversion: Its key feature is that it can be run on the live, running physical server. It does not require taking the server offline.
VSS Integration: It uses Windows’ Volume Shadow Copy Service (VSS) to take a consistent, point-in-time snapshot of the volumes you select (e.g., the C: drive and D: drive).
VHDX Output: It then reads from these snapshots and creates a new, bootable VHDX (or VHD) file.
Migration Process: The process is simple: run Disk2vhd on the physical server, save the VHDX file (e.g., to a network share), and then create a new Generation 1 (for Server 2012 R2) or Generation 2 VM in Hyper-V, and attach this existing VHDX as its hard drive. After booting the VM, you would remove the old hardware drivers and install Hyper-V Integration Services. This achieves the P2V migration with minimal downtime (only the time to copy the VHDX and boot the new VM).
Why A (The Microsoft Virtual Machine Converter – MVMC) is Incorrect: The Microsoft Virtual Machine Converter (MVMC) was the official, complex tool for P2V conversions. However, it was deprecated and retired by Microsoft in 2017. It is no longer supported, not available for download, and should not be used. Disk2vhd is the recommended utility for this task now.
Why C (Windows Server Backup…) is Incorrect: Windows Server Backup creates backups intended for restore, typically to similar hardware (Bare-Metal Restore – BMR). While it is technically possible in some scenarios to restore a BMR backup into a running VM, it is not a supported, reliable, or intended P2V conversion method. It often fails due to hardware abstraction layer (HAL) and driver differences between the physical and virtual hardware.
Why D (Azure Site Recovery – ASR) is Incorrect: Azure Site Recovery (ASR) can be used for P2V conversions, but its primary purpose is disaster recovery and migration to Azure. You can configure ASR to replicate a physical server, and then “failover” to an Azure VM (a P2V migration to the cloud). While ASR can also be used to migrate to an on-premises VMM environment, it is a very complex and heavy solution (requiring a Configuration Server, Process Server, and a Recovery Services vault) for what Disk2vhd can do with a single executable. It is not the “most common” or “effective” utility for this simple P2V.
Question 155. You are a security administrator monitoring your on-premises Active Directory Domain Services (AD DS). You are concerned about lateral movement and credential-theft attacks. You want a cloud-based security solution that uses sensors installed on your domain controllers to monitor AD authentication traffic, learn the “normal” behavior of users and devices, and then use behavioral analytics and machine learning to detect anomalies, such as Pass-the-Hash attempts, unusual resource access, or reconnaissance activities. Which Microsoft security service is designed for this?
A) Microsoft Defender for Identity
B) Azure AD Identity Protection
C) Microsoft Defender for Cloud
D) Credential Guard
Correct Answer: A
Explanation:
The correct answer is A, Microsoft Defender for Identity. This is a cloud-based security solution specifically designed to protect on-premises Active Directory instances from advanced threats.
Why A (Microsoft Defender for Identity) is Correct: Microsoft Defender for Identity (formerly Azure Advanced Threat Protection or Azure ATP) is a User and Entity Behavior Analytics (UEBA) solution for on-premises Active Directory. Its entire architecture matches the scenario described:
Cloud-Based: It is a cloud service, part of the Microsoft 365 Defender portal.
Sensors on Domain Controllers: You install a “Defender for Identity sensor” on all of your on-premises domain controllers.
Monitors Traffic: This sensor monitors the network traffic coming to the DC (e.g., Kerberos, NTLM, DNS, RPC traffic) and parses the Windows Event Logs related to authentication.
Behavioral Analytics (UEBA): This data is sent to the Defender for Identity cloud service, which builds a “normal” baseline of behavior for every user and device in the organization (e.g., “User ‘Bob’ normally logs into ‘PC-05’ and ‘Server-Finance’ between 9 AM and 5 PM”).
Detects Anomalies: When it detects a deviation from this baseline or a known attack pattern, it generates an alert. This includes detecting “Pass-the-Hash” (when a user’s hash is seen authenticating from a machine they don’t normally use), “Pass-the-Ticket,” “Golden Ticket,” reconnaissance (e.g., an “admin” account suddenly scanning the network), and other lateral movement techniques.
Why B (Azure AD Identity Protection) is Incorrect: Azure AD Identity Protection is a similar UEBA and risk-detection engine, but it is for cloud-native identities in Azure Active Directory. It detects risks like “impossible travel,” “logon from anonymous IP address,” or “leaked credentials” for AAD accounts. It does not have sensors that monitor your on-premises domain controllers.
Why C (Microsoft Defender for Cloud) is Incorrect: Microsoft Defender for Cloud (formerly Azure Security Center) is a broad cloud security posture management (CSPM) and cloud workload protection (CWP) solution. It assesses the security configuration of your Azure resources (like VMs, SQL databases, App Services) and can also protect on-premises servers (via Azure Arc). It might detect that a server is vulnerable (e.g., “missing patches”), but it is not the specialized UEBA solution for Active Directory authentication traffic like Defender for Identity is.
Why D (Credential Guard) is Incorrect: Credential Guard is a prevention and hardening feature that runs on a server. It uses VBS to protect the LSASS process and prevent PtH from succeeding on that specific server. It is not a monitoring or detection solution that analyzes network traffic for the entire domain. It’s a “shield,” whereas Defender for Identity is the “security camera system.”
Question 156. Your organization is implementing a two-node Storage Spaces Direct (S2D) cluster running Windows Server 2022. This cluster will be used at a remote, “lights-out” edge location. The primary concern is resilience; the cluster must be able to withstand the failure of an entire server or the failure of a single disk in each server at the same time and keep the storage volumes online. Which S2D resiliency feature is designed to provide this level of protection, specifically for a two-node cluster?
A) Nested Resiliency
B) Three-way Mirror
C) Dual Parity
D) Storage Replica
Correct Answer: A
Explanation:
The correct answer is A, Nested Resiliency. This is a special resiliency type introduced in Windows Server 2019, specifically for two-node Storage Spaces Direct (S2D) clusters, to provide enhanced fault tolerance.
Why A (Nested Resiliency) is Correct: Standard resiliency options (like a two-way mirror) in a two-node cluster are vulnerable. If you have a two-way mirror, you have two copies of the data (one on each node). If one node fails, you are “down to one copy.” If, during that time, a disk fails on the remaining node, you lose your data. Nested Resiliency was created to solve this. Nested Resiliency combines two types of resiliency into one:
Two-way Mirror: First, it creates a standard two-way mirror, with one copy of the data on each of the two nodes.
Local Parity: Then, within each node, it also adds local parity (similar to RAID-5 or RAID-6) across the drives on that node.
The result is a “nested” resilience. To write data, it has to be mirrored to the other node and parity has to be calculated on both nodes. The benefit is exactly what the scenario asks for:
Case 1: Entire Server Failure: If “Node-01” fails, “Node-02” still has a complete, self-contained copy of all the data (protected by its own local parity) and can stay online.
Case 2: Simultaneous Disk Failures: If “Disk-A” in “Node-01” fails and “Disk-B” in “Node-02” fails at the same time, the cluster stays online. The two-way mirror is still intact (as it can read from the other disks on each node), and the local parity on each node can reconstruct the data from the failed disk.
This provides a significantly higher level of fault tolerance for small two-node edge clusters.
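Creating a nested two-way mirror volume is done by defining a storage tier that keeps four data copies (two per node); the pool, tier, and volume names below are hypothetical, following the pattern in Microsoft's nested resiliency documentation:

```powershell
# Run on the 2-node S2D cluster. Pool and tier names are hypothetical.
# NumberOfDataCopies 4 = two copies on each node (nested two-way mirror).
New-StorageTier -StoragePoolFriendlyName "S2D on Edge-Cluster" `
    -FriendlyName "NestedMirror" -ResiliencySettingName Mirror `
    -MediaType SSD -NumberOfDataCopies 4

# Create a volume on that tier.
New-Volume -StoragePoolFriendlyName "S2D on Edge-Cluster" `
    -FriendlyName "Volume01" -StorageTierFriendlyNames "NestedMirror" `
    -StorageTierSizes 500GB
```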
Why B (Three-way Mirror) is Incorrect: A three-way mirror requires a minimum of three nodes (to store the three copies on three different fault domains). It is not a possible or valid resiliency type for a two-node cluster.
Why C (Dual Parity) is Incorrect: Dual parity (like three-way mirror) requires more nodes to be effective and fault-tolerant. A parity-based S2D solution requires a minimum of four nodes. It is not an option for a two-node cluster.
Why D (Storage Replica) is Incorrect: Storage Replica is a technology for replicating volumes between two separate servers or clusters. It is not a resiliency type within a Storage Spaces Direct (S2D) pool. S2D uses its own internal resiliency (mirrors, parity). You might use Storage Replica to replicate the data from this S2D cluster to another cluster, but it’s not how you configure the S2D volume itself.
Question 157. You are managing a large hybrid environment. Your on-premises Windows Server 2022 machines have been onboarded as Azure Arc-enabled servers. The security team has a new requirement: all servers must have the “Password complexity” security setting enforced, and the “Remote Desktop (RDP)” service must be disabled. You want to use Azure to audit all 500 servers for this configuration and automatically remediate any servers that are non-compliant. Which Azure service, enabled via Azure Arc, should you use to assign and enforce this kind of in-guest operating system configuration?
A) Azure Automation Update Management
B) Microsoft Defender for Identity
C) Azure Policy Guest Configuration
D) Azure Site Recovery (ASR)
Correct Answer: C
Explanation:
The correct answer is C, Azure Policy Guest Configuration. This is the specific feature of Azure Policy designed to audit and enforce settings inside a virtual or physical machine.
Why C (Azure Policy Guest Configuration) is Correct: Azure Arc-enabled servers allow you to extend the Azure control plane to your on-premises machines. A primary benefit of this is using Azure Policy.
Azure Policy: This is the service for governance at scale. You assign “Policy Definitions” (e.g., “Audit machines that do not have RDP disabled”) to a scope (like a subscription or resource group).
Guest Configuration: Standard Azure Policy can only check the properties of the Azure resource (e.g., “does this VM have a tag?”). To look inside the operating system, you use the Guest Configuration (GC) feature.
Arc Integration: The Azure Arc agent (Connected Machine Agent) manages the Guest Configuration extension on the server.
Audit and Remediate: You can assign a built-in or custom Guest Configuration policy. The policy has two modes:
Audit: The GC agent on the server will check its internal configuration (e.g., read the registry key for RDP or the security policy for password complexity) and report “Compliant” or “Non-Compliant” back to Azure Policy. This gives you a central dashboard of your compliance state.
Remediate (DeployIfNotExists): For “DeployIfNotExists” or “Modify” policies, the GC agent can be configured to automatically fix the non-compliant setting. For example, if it finds RDP enabled, it can run a Desired State Configuration (DSC) script to set the service to “Disabled.”
This provides the exact “audit and automatically remediate” workflow for in-guest settings that the prompt requires.
Why A (Azure Automation Update Management) is Incorrect: Update Management, a feature of Azure Automation, is used exclusively for patching the operating system (e.g., installing monthly Windows security updates). It does not manage or enforce security configurations like password complexity or service statuses.
Why B (Microsoft Defender for Identity) is Incorrect: This is a security monitoring tool for Active Directory. It detects threats in authentication traffic. It does not audit or enforce configuration settings inside a server’s OS.
Why D (Azure Site Recovery – ASR) is Incorrect: ASR is a disaster recovery service. It replicates virtual machines to Azure to provide business continuity. It has no role in configuration management or policy enforcement on a running server.
Question 158. You need to implement a high-security “server isolation” policy for a three-tier application. The application consists of a “Web” tier, an “App” tier, and a “Database” tier, all running on Windows Server 2022. The security requirements are:
- Web servers must only be able to communicate with App servers on TCP port 8080.
- App servers must only be able to communicate with Database servers on TCP port 1433.
- No other communication is allowed between these tiers, and they must be isolated from the rest of the network. You must implement this using a host-based firewall. Which feature of the “Windows Defender Firewall with Advanced Security” should you use to create and enforce these rules?
A) Connection Security Rules (IPsec)
B) AppLocker publisher rules
C) Standard Inbound and Outbound Rules
D) Credential Guard
Correct Answer: A
Explanation:
The correct answer is A, Connection Security Rules (IPsec). While standard firewall rules can block ports, they cannot easily create an authenticated “allow” policy that also isolates servers. Connection Security Rules are designed for this exact “domain isolation” or “server isolation” scenario.
Why A (Connection Security Rules – IPsec) is Correct: Standard firewall rules (Option C) are “all or nothing.” You could create an inbound rule on the App server to allow TCP 8080. But by default, this rule would allow traffic from any source. You could try to lock it down by IP address, but this is brittle and hard to manage. Connection Security Rules are different. They use IPsec to create a “secure channel” between trusted computers before the standard firewall rules are even evaluated. Here is the workflow you would implement:
Define Groups: You would use Active Directory to create groups, e.g., “Tier-Web,” “Tier-App,” and “Tier-DB.”
Create Connection Security Rules (GPO): You would create GPOs that apply to each tier.
Rule 1 (Web-to-App): A rule on the “Tier-Web” servers that “requests” authentication when talking to “Tier-App” servers on port 8080. A corresponding rule on “Tier-App” “requires” authentication from “Tier-Web” on that port. This rule uses IPsec (Kerberos authentication) to prove the servers are who they say they are.
Rule 2 (App-to-DB): A similar rule for “Tier-App” and “Tier-DB” on port 1433.
Firewall “Block” Rules: Finally, you create standard firewall rules (Inbound and Outbound) that block all traffic, with one exception: you check the box that says “Allow the connection if it is secure.”
The result is a “secure-by-default” policy. The only traffic allowed is traffic that has been authenticated via the IPsec Connection Security Rules. This isolates the tiers from each other (the Web server cannot talk to the DB server) and from the rest of the network (a rogue server cannot talk to the App server, as it cannot authenticate).
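The Web-to-App rule described above could be sketched in PowerShell roughly as follows. This is a minimal sketch, not a complete GPO: the rule names are invented, the default Kerberos computer authentication is assumed, and in practice you would also scope the firewall rule to the “Tier-Web” group (e.g., via the `-RemoteMachine` SDDL parameter).

```powershell
# --- On the App tier (deployed via GPO): require IPsec authentication
# --- for inbound TCP 8080. Default auth sets use Kerberos (computer),
# --- which proves the peer is a trusted domain-joined machine.
New-NetIPsecRule -DisplayName "Require Auth - Web to App 8080" `
    -InboundSecurity Require -OutboundSecurity Request `
    -Protocol TCP -LocalPort 8080

# --- Firewall rule: allow TCP 8080 only when the connection is secured.
# --- "-Authentication Required" is the PowerShell equivalent of the
# --- "Allow the connection if it is secure" checkbox in the GUI.
New-NetFirewallRule -DisplayName "Allow Secure Web to App 8080" `
    -Direction Inbound -Protocol TCP -LocalPort 8080 `
    -Action Allow -Authentication Required
```

The equivalent App-to-DB rule would repeat this pattern for TCP 1433 on the “Tier-DB” servers.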
Why B (AppLocker publisher rules) is Incorrect: AppLocker is an application control technology. It controls what applications can run on a server. It has no control over network communication or ports.
Why C (Standard Inbound and Outbound Rules) is Incorrect: As explained above, you could use standard rules, but they are not the best-practice solution for isolating trusted servers. They are typically based on IP addresses, which are not secure (can be spoofed) and are difficult to manage. The requirement for a secure, isolated “tier” system strongly implies the use of IPsec-based Connection Security Rules.
Why D (Credential Guard) is Incorrect: Credential Guard is a credential protection technology. It protects the LSASS process from credential-theft attacks. It has no control over network firewall rules.
Question 159. You are an administrator for a small business. You have a single on-premises physical Windows Server 2019 server that acts as a file server. You do not have System Center or a Hyper-V environment. You need to implement a cloud-based backup solution using Azure Backup. Your primary goal is to back up specific critical files and folders (e.g., D:\UserData and E:\Shared) to an Azure Recovery Services vault. You do not need to back up the entire server (bare-metal) or its system state. Which Azure Backup agent or component should you install on this server?
A) The Azure Site Recovery (ASR) Mobility Service
B) The Microsoft Azure Recovery Services (MARS) Agent
C) The Azure Monitor Agent (AMA)
D) Microsoft Azure Backup Server (MABS)
Correct Answer: B
Explanation:
The correct answer is B, the Microsoft Azure Recovery Services (MARS) Agent. The MARS agent is designed specifically for file and folder level backup from on-premises Windows machines directly to Azure.
Why B (The MARS Agent) is Correct: The Microsoft Azure Recovery Services (MARS) agent is a lightweight, simple agent that is ideal for this scenario.
Direct-to-Cloud: You install the agent directly on the Windows Server (or Windows Client) machine you want to protect.
File/Folder/System State: The MARS agent is only capable of backing up three things: Files, Folders, and System State. It cannot perform a full “bare-metal” or “VM” backup.
Meets Requirement: The scenario’s requirement is to only back up “specific critical files and folders” (D:\UserData, E:\Shared). This is the exact use case for the MARS agent.
No Local Infrastructure: It does not require any additional on-premises servers, System Center, or Hyper-V. It communicates directly from the server to the Azure Recovery Services vault over the internet (HTTPS).
This is the simplest, most direct, and most cost-effective solution for file-level backup to Azure.
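Once the MARS agent is installed and registered with the vault, the backup policy for the two folders can be sketched with the `MSOnlineBackup` PowerShell module that the agent installs. The schedule and retention values below are illustrative assumptions, not requirements of the scenario:

```powershell
# Sketch of a MARS file/folder backup policy, assuming the agent is
# already installed and registered with the Recovery Services vault.
Import-Module MSOnlineBackup

$policy    = New-OBPolicy
$schedule  = New-OBSchedule -DaysOfWeek Monday,Wednesday,Friday -TimesOfDay 21:00
$retention = New-OBRetentionPolicy -RetentionDays 30

Set-OBSchedule        -Policy $policy -Schedule $schedule
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention

# Back up only the requested paths -- no system state, no bare metal.
$files = New-OBFileSpec -FileSpec "D:\UserData","E:\Shared"
Add-OBFileSpec -Policy $policy -FileSpec $files

# Activate the policy on this server.
Set-OBPolicy -Policy $policy
```

Note how the policy contains nothing but file specifications, a schedule, and retention, which mirrors the MARS agent’s deliberately narrow scope.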
Why A (The ASR Mobility Service) is Incorrect: The ASR Mobility Service is for disaster recovery (Azure Site Recovery), not backup. Its purpose is to replicate the entire server (all disk I/O) to Azure to allow you to failover and run the server in Azure in the event of a disaster. It is not a backup tool for file-level restore.
Why C (The Azure Monitor Agent – AMA) is Incorrect: The AMA is a monitoring agent. Its purpose is to collect logs and performance metrics (e.g., event logs, CPU usage) and send them to Azure Monitor for analysis. It has no backup capabilities.
Why D (Microsoft Azure Backup Server – MABS) is Incorrect: Microsoft Azure Backup Server (MABS) is a much more powerful, and complex, solution. MABS is a free, on-premises “server” product (a re-branded version of System Center DPM) that you would install on a separate server. It is a full “Disk-to-Disk-to-Cloud” solution. It performs application-aware backups (SQL, Exchange, SharePoint) and full bare-metal/VM backups. It backs them up locally to its own disks first (for fast local restores) and then sends a copy to the Azure vault. This is massive overkill for a simple file/folder backup from a single server.
Question 160. You are deploying a new Storage Spaces Direct (S2D) cluster running Windows Server 2022. The servers are equipped with network adapters that support RDMA (Remote Direct Memory Access). You have chosen to use adapters that support the RoCE (RDMA over Converged Ethernet) protocol. To ensure the RDMA traffic is reliable and to prevent the packet loss that RoCE is highly sensitive to, which corresponding technology must you meticulously configure on your “Top-of-Rack” (ToR) physical network switches?
A) Data Center Bridging (DCB)
B) Switch Embedded Teaming (SET)
C) Link Layer Discovery Protocol (LLDP)
D) SMB Multichannel
Correct Answer: A
Explanation:
The correct answer is A, Data Center Bridging (DCB). This is a set of IEEE standards that are a mandatory prerequisite for creating a “lossless” or “near-lossless” Ethernet fabric, which is required by the RoCE protocol.
Why A (Data Center Bridging – DCB) is Correct: Storage Spaces Direct (S2D) uses the SMB 3 protocol (specifically SMB Direct) for all its inter-node storage traffic. To get the best performance, SMB Direct will use RDMA, which allows one server’s network card to write data directly into another server’s memory, bypassing the CPU and OS kernel.
RDMA Protocols: There are two main RDMA protocols: iWARP and RoCE.
RoCE’s Weakness: The prompt specifies RoCE (RDMA over Converged Ethernet). RoCE is a very lightweight protocol that runs directly over Ethernet (or UDP). It does not have the native congestion control and retransmission mechanisms of a protocol like TCP.
The “Lossless” Requirement: Because of this, RoCE is extremely sensitive to packet loss. A single dropped packet can cause a connection to stall or crash, which is catastrophic for a storage fabric.
DCB is the Solution: Data Center Bridging (DCB) is the solution to this problem. DCB is a suite of technologies for physical switches that creates a “lossless” fabric. The most important part of DCB is Priority-based Flow Control (PFC – IEEE 802.1Qbb). You configure PFC on the switches (and the server NICs) to create a special, high-priority “lane” for the RoCE traffic. If a switch’s buffer for this lane starts to fill (congestion), the switch sends a PAUSE frame back to the server, telling it to stop sending RoCE traffic for a short interval. The server pauses, the congestion clears, and transmission resumes. This process prevents the switch buffer from overflowing and dropping a packet.
Therefore, configuring DCB (specifically PFC) on the physical switches is a non-negotiable, mandatory step for a successful RoCE-based S2D deployment.
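The host-side half of this configuration can be sketched in PowerShell. This only covers the server NICs; the matching PFC/ETS settings on the ToR switches must be configured separately in the switch OS. Priority 3 for SMB Direct and the 50% bandwidth reservation are common conventions here, not requirements, and the adapter name is an example:

```powershell
# Tag SMB Direct (TCP/RDMA port 445) traffic with 802.1p priority 3.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for the SMB priority;
# all other priorities remain ordinary (lossy) Ethernet.
Enable-NetQosFlowControl  -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for the SMB traffic class via ETS.
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply the DCB/QoS settings to the RDMA adapter (name is an example).
Enable-NetAdapterQos -Name "SLOT 2 Port 1"
```

If the host and switch PFC settings do not match exactly, the fabric silently falls back to lossy behavior, which is why the question stresses “meticulously” configuring the switches.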
Why B (Switch Embedded Teaming – SET) is Incorrect: Switch Embedded Teaming (SET) is a Windows Server feature. It is the technology you use on the server to team your physical network adapters for use by the Hyper-V virtual switch. It has nothing to do with the configuration of the physical ToR switches.
Why C (Link Layer Discovery Protocol – LLDP) is Incorrect: LLDP is a protocol that allows network devices to advertise their identity and capabilities to their neighbors. While it is used by DCB (to exchange and verify DCB settings), it is not the feature that provides the lossless fabric; it is a discovery protocol, not a flow-control protocol.
Why D (SMB Multichannel) is Incorrect: SMB Multichannel is a feature of the SMB protocol (software). It automatically detects and uses multiple network paths between a server and client to aggregate bandwidth and provide fault tolerance. S2D uses SMB Multichannel, but this is an application-level feature. It relies on the underlying network (which, in this case, must be made lossless by DCB) to function correctly.