Question 121. Your organization maintains a VMware vSphere environment on-premises and utilizes Azure for disaster recovery. You are tasked with implementing a disaster recovery strategy for several critical VMware virtual machines using Azure Site Recovery (ASR). Before you can begin replicating virtual machines, you must deploy a specific on-premises component to manage communication, data replication, and orchestration. Which component must be deployed first within your vSphere environment?
A) The Azure Arc connected machine agent
B) The ASR Configuration Server
C) The Azure Site Recovery Mobility Service
D) The Azure Recovery Services Agent (MARS)
Correct Answer: B
Explanation:
The correct answer is B, the ASR Configuration Server. This is the cornerstone component required when protecting VMware virtual machines or physical servers using Azure Site Recovery. The deployment of the Configuration Server is a mandatory prerequisite before any replication can be initiated.
Why B (The ASR Configuration Server) is Correct: The ASR Configuration Server is a high-capacity on-premises appliance (deployed as a VMware virtual machine via an OVF template) that serves multiple critical functions. It is the central management and communication hub between the on-premises vSphere environment and the Azure Site Recovery service. Its primary roles include:
Coordination: It acts as the central point of orchestration for the entire replication and failover process. It communicates with the on-premises vCenter Server to discover virtual machines and with the Azure Recovery Services vault to receive replication policies and commands.
Embedded Process Server: By default, the Configuration Server also includes the Process Server role. The Process Server is the “workhorse” of replication. It receives replication data from the protected source machines (via the Mobility Service), and then caches, compresses, encrypts, and transmits this data to the target Azure storage account (cache storage account) associated with the Recovery Services vault. For larger deployments, the Process Server role can be scaled out to separate virtual machines to handle a higher replication load.
Embedded Master Target Server: The Configuration Server also contains the Master Target Server role, which is essential for failback operations. When failing back from Azure to the on-premises vSphere environment, the Master Target Server receives the replication data from Azure and writes it back to the on-premises VMware environment.
Therefore, deploying the Configuration Server is the foundational first step. It establishes the bridge, the management plane, and the initial data replication pipeline necessary to protect VMware workloads.
Why A (The Azure Arc connected machine agent) is Incorrect: The Azure Arc connected machine agent is used to onboard on-premises servers (both physical and virtual) into Azure Arc-enabled servers. The primary purpose of Azure Arc is to extend the Azure control plane (Azure Policy, Azure Monitor, Microsoft Defender for Cloud, Azure Automation) to hybrid and multi-cloud resources. While incredibly useful for hybrid management, governance, and security, it has no direct role in the data replication or orchestration process of Azure Site Recovery. ASR is a distinct disaster recovery service, whereas Arc is a management and governance service.
Why C (The ASR Mobility Service) is Incorrect: The ASR Mobility Service is indeed a critical component, but it is not the first component deployed. The Mobility Service is a lightweight agent that must be installed on each individual VMware virtual machine (or physical server) that you intend to protect. Its job is to capture all disk I/O (writes) on the source machine in real-time and forward this data to the Process Server (which, as discussed, is part of the Configuration Server). The installation of the Mobility Service is typically orchestrated from the Configuration Server after the Configuration Server has been deployed, registered with the vault, and has discovered the vCenter inventory. You cannot install and direct the Mobility Service to a non-existent Process Server.
Why D (The Azure Recovery Services Agent (MARS)) is Incorrect: The Azure Recovery Services Agent, commonly known as the MARS agent, is used for a completely different service: Azure Backup. The MARS agent is installed on on-premises Windows Servers or Windows clients to back up specific files, folders, and system state directly to an Azure Recovery Services vault. It is a file-level backup solution and has no capability to perform full-machine replication, orchestration, or failover/failback of entire virtual machines, which is the exclusive domain of Azure Site Recovery. Using the MARS agent would not provide disaster recovery for a VMware VM.
Question 122. You are designing a new four-node, hyper-converged Storage Spaces Direct (S2D) cluster running Windows Server 2022. The servers are equipped with 100 GbE network adapters that support Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE). To ensure the S2D storage network achieves optimal, low-latency performance and to prevent packet loss, which networking feature must be meticulously configured on the physical network switches and the server network adapters?
A) SMB Multichannel
B) Switch Embedded Teaming (SET)
C) Data Center Bridging (DCB)
D) Receive Side Scaling (RSS)
Correct Answer: C
Explanation:
The correct answer is C, Data Center Bridging (DCB). This is a set of IEEE standards crucial for creating a “lossless” or “near-lossless” fabric over standard 10/25/100 GbE Ethernet, which is a prerequisite for high-performance storage protocols like RDMA over Converged Ethernet (RoCE).
Why C (Data Center Bridging – DCB) is Correct: Storage Spaces Direct (S2D) heavily relies on the SMB 3 protocol (specifically SMB Direct) for inter-node storage communication (e.g., cluster shared volume traffic, data rebalancing). To achieve the lowest possible latency and highest throughput, S2D is designed to use RDMA. RDMA allows one server’s network adapter to write data directly into the memory of another server, bypassing the CPU, kernel, and traditional networking stack on both the sending and receiving ends. This results in massive performance gains.
However, the RoCE (RDMA over Converged Ethernet) protocol is highly sensitive to packet loss. Unlike standard TCP/IP, which has robust mechanisms to handle dropped packets and retransmissions, RoCE assumes a reliable, lossless network fabric. If packets are dropped, RoCE performance can degrade catastrophically, often leading to connection stalls and a complete breakdown of the storage fabric.
This is where Data Center Bridging (DCB) becomes essential. DCB is not a single feature but a suite of technologies that includes:
Priority-based Flow Control (PFC – IEEE 802.1Qbb): This is the most critical part. PFC allows you to create eight different priority classes for traffic. You can configure PFC to pause traffic for a specific priority class (e.g., the RoCE/SMB Direct traffic) without pausing other traffic classes (like cluster heartbeat or management traffic). If a switch buffer begins to fill, it sends a “PAUSE” frame to the server’s network adapter for that priority class, preventing the server from sending more data until the congestion clears. This effectively stops packet loss before it happens.
Enhanced Transmission Selection (ETS – IEEE 802.1Qaz): This allows you to guarantee a minimum amount of bandwidth for specific traffic classes, ensuring that high-priority storage traffic is never starved by low-priority bulk data traffic.
To implement S2D with RoCE adapters, you must configure DCB (specifically PFC) end-to-end: on the server network adapters (via PowerShell cmdlets) and, crucially, on every physical network switch port that the storage adapters connect to.
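A minimal host-side sketch of this DCB/PFC configuration, assuming RoCE adapters named “SLOT 2 Port 1” and “SLOT 2 Port 2” and priority 3 for SMB Direct traffic (both are illustrative; the same PFC and ETS settings must be mirrored on every switch port):

```powershell
# Tag SMB Direct (SMB over RDMA, TCP 445) traffic with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for the SMB priority class
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# ETS: reserve a bandwidth share for the SMB traffic class (50% is illustrative)
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply DCB/QoS to the RDMA-capable adapters and ignore switch DCBX advertisements
Enable-NetAdapterQos -Name "SLOT 2 Port 1","SLOT 2 Port 2"
Set-NetQosDcbxSetting -Willing $false
```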
Why A (SMB Multichannel) is Incorrect: SMB Multichannel is a feature of the SMB 3 protocol, not a network-level configuration. It is enabled by default and automatically used by S2D. SMB Multichannel allows a single SMB session to create multiple TCP/IP connections over multiple network paths (e.g., two 100 GbE ports). This aggregates bandwidth and provides fault tolerance. While S2D uses SMB Multichannel, configuring DCB is the specific action required to make the underlying RDMA protocol (which SMB Multichannel will use via SMB Direct) reliable. SMB Multichannel itself does not prevent packet loss.
Why B (Switch Embedded Teaming – SET) is Incorrect: Switch Embedded Teaming (SET) is the network adapter teaming technology used in Windows Server for Hyper-V virtual switches. It allows you to team up to eight physical network adapters into a single logical “team” for use by the virtual switch. It provides load balancing and fault tolerance for virtual machine traffic and the host operating system. While you would absolutely use SET on the S2D nodes to team your adapters, SET is not the feature that enables lossless Ethernet for RDMA. The configuration of DCB is a separate step applied to the physical adapters (or the virtual adapters on the host) in addition to configuring SET.
Why D (Receive Side Scaling – RSS) is Incorrect: Receive Side Scaling (RSS) is a network adapter driver technology that distributes the load of processing incoming network traffic across multiple CPU cores. This prevents a single CPU core from becoming a bottleneck in high-speed networking scenarios. Like SMB Multichannel, RSS is a standard feature that is enabled by default and is beneficial for performance, but it does not address the core problem of RDMA’s intolerance to packet loss. DCB is the specific technology required to create the lossless fabric.
Question 123. Your company has a large number of Windows Server 2016 and Windows Server 2019 servers running in an on-premises data center. The IT security team wants to enforce granular security settings, compliance standards, and configuration baselines on these servers using the same Azure Policy definitions that are applied to your cloud-native Azure VMs. You must implement a solution that allows Azure Policy to audit and enforce settings on these on-premises servers. What is the primary service you must use to accomplish this?
A) Implement Azure Site Recovery and replicate the servers to Azure.
B) Deploy the Azure Log Analytics agent and connect it to a workspace.
C) Onboard the on-premises servers to Azure Arc-enabled servers.
D) Configure Just Enough Administration (JEA) endpoints on all servers.
Correct Answer: C
Explanation:
The correct answer is C, onboard the on-premises servers to Azure Arc-enabled servers. This is the designated Microsoft solution for extending the Azure control plane, including Azure Policy, to machines running outside of Azure.
Why C (Onboard to Azure Arc-enabled servers) is Correct: Azure Arc is a hybrid cloud platform that extends Azure services and management capabilities to any infrastructure, including on-premises data centers (with VMware or Hyper-V), other public clouds, or edge locations.
The “Azure Arc-enabled servers” feature works by installing the Azure Connected Machine agent on your Windows or Linux servers. Once this agent is installed, registered, and connected, the on-premises server appears as a native Azure resource within the Azure Resource Manager (ARM). This is a pivotal change.
Because the server is now represented as a first-class ARM resource, you can target it with many standard Azure management services, just as you would an Azure VM. The most prominent of these services is Azure Policy. You can assign Azure Policy definitions (both built-in and custom) to a resource group or subscription containing your Azure Arc-enabled servers. The agent on the server will then communicate with the Azure Policy service to evaluate its configuration.
For Windows Servers, Azure Policy uses a guest configuration feature (which is an extension managed by the Arc agent) to audit settings inside the operating system. It can check registry keys, service statuses, software installations, and more. Furthermore, it can be used for “enforcement” (DeployIfNotExists or Modify policies) to automatically remediate non-compliant settings, often by triggering Azure Automation runbooks or deploying Desired State Configuration (DSC) scripts. This directly fulfills the requirement to apply and enforce Azure Policy definitions.
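As a rough illustration of the onboarding step, the Connected Machine agent is installed and then registered with the azcmagent CLI; the resource group, subscription, tenant, and region values below are placeholders:

```powershell
# Install the Azure Connected Machine agent (downloaded MSI) silently
msiexec /i AzureConnectedMachineAgent.msi /qn

# Register the server with Azure Arc so it appears as an ARM resource
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --resource-group "rg-arc-servers" `
    --tenant-id "<tenant-id>" `
    --subscription-id "<subscription-id>" `
    --location "eastus"
```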
Why A (Implement Azure Site Recovery) is Incorrect: Azure Site Recovery (ASR) is a disaster recovery (DR) service. Its purpose is to replicate on-premises virtual machines (or physical servers) to Azure to provide business continuity in the event of an on-premises outage. While the replicated server data exists in Azure, ASR does not make the live, running, on-premises server a manageable Azure resource. You could only apply Azure Policy to the replica after a failover, which does not meet the requirement of managing the active on-premises servers.
Why B (Deploy the Azure Log Analytics agent) is Incorrect: Deploying the Log Analytics agent (also known as the Microsoft Monitoring Agent or MMA) is a component of the Azure Monitor service. Its primary function is to collect logs, performance data, and telemetry from on-premises servers and send that data to a Log Analytics workspace for analysis, querying (using KQL), and alerting. While Azure Monitor is often used alongside Azure Arc, the agent itself does not make the server an ARM resource. The Log Analytics agent is for data ingestion and monitoring, not for control plane management and policy enforcement. The Azure Arc agent is required to enable Azure Policy.
Why D (Configure Just Enough Administration – JEA) is Incorrect: Just Enough Administration (JEA) is a powerful security feature within Windows Server, built on PowerShell. Its purpose is to implement the principle of least privilege for administrative tasks. JEA allows you to create constrained PowerShell endpoints where users can run only specific pre-approved commands, cmdlets, or functions (e.g., “Restart-Service”) without giving them full local administrator rights. JEA is an on-premises security control for managing user access. It has no connection to extending the Azure control plane or applying Azure Policy to a server.
Question 124. You are an administrator for a large financial institution that needs to migrate a legacy file server, “FS-Legacy,” running Windows Server 2012. The source server hosts 8 TB of data across 25 SMB shares, with intricate share-level and NTFS permissions. The migration target is a new server, “FS-Modern,” running Windows Server 2022. The primary goals are to migrate all data, all share configurations, and all permissions with minimal downtime. The solution must also handle the server’s identity, automatically redirecting users to the new server after migration. Which Windows Server tool or feature is explicitly designed for this comprehensive migration scenario?
A) Robocopy (Robust File Copy)
B) Storage Migration Service (SMS)
C) Distributed File System Replication (DFS-R)
D) Azure File Sync
Correct Answer: B
Explanation:
The correct answer is B, Storage Migration Service (SMS). This feature, first introduced in Windows Server 2019 and managed through Windows Admin Center, is engineered specifically for this exact scenario: migrating legacy file servers to modern versions of Windows Server or to Azure.
Why B (Storage Migration Service – SMS) is Correct: The Storage Migration Service provides a holistic, wizard-driven solution that automates the entire migration process. It operates in three distinct phases:
Inventory: The SMS orchestrator (which can be a separate server or the destination server itself) contacts the source server(s). It inventories all data, all SMB share configurations (names, settings, etc.), all NTFS file/folder permissions, and network configurations. This phase allows you to assess the scope of the migration.
Transfer: SMS performs the bulk data transfer. It uses a high-performance, multi-threaded copy engine (which is significantly more advanced than Robocopy) to move the data from the source (FS-Legacy) to the destination (FS-Modern). This process is idempotent, meaning it can be re-run multiple times. The first run copies everything, and subsequent runs copy only the delta (new or changed files), which is perfect for minimizing the final cutover window. It meticulously copies all data and preserves the complex NTFS permissions.
Cutover: This is the most critical and powerful feature of SMS. During the cutover phase (which you initiate during a planned maintenance window), SMS performs the following actions:
It performs a final delta sync to catch any last-minute changes.
It stops the services on the source server (FS-Legacy).
It moves the identity of the source server. This means it renames the source server to a new, random name and assigns the original name (FS-Legacy) and IP address(es) to the destination server (FS-Modern).
It transfers all the SMB share configurations to the destination server.
The result is that when the cutover is complete, the new server “FS-Modern” is now “FS-Legacy” on the network. End-users and applications continue to access \\FS-Legacy\Share without even knowing a migration occurred. This automated identity transfer and configuration migration is what makes SMS the superior and correct choice.
Why A (Robocopy) is Incorrect: Robocopy is a powerful command-line utility for copying files. You could use Robocopy to copy the 8 TB of data and preserve NTFS permissions (using the /COPYALL or /SEC switches). However, Robocopy is a manual process. It would not migrate the 25 share configurations. You would have to manually script the recreation of every share, including all its specific settings (e.g., access-based enumeration, caching). Most importantly, Robocopy has no mechanism to perform the identity cutover. You would have to manually rename servers, manage DNS and SPN records, and face a significant risk of user disruption.
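For comparison, a sketch of what the manual Robocopy approach looks like for a single share (the share and path names are hypothetical); it preserves data and NTFS ACLs but recreates no share definitions and performs no identity cutover:

```powershell
# Manual pre-seed of one share: /MIR mirrors the tree, /COPYALL copies data,
# attributes, timestamps, NTFS ACLs, owner, and auditing info; /MT uses 32 threads
robocopy \\FS-Legacy\Finance \\FS-Modern\D$\Shares\Finance /MIR /COPYALL /MT:32 /R:1 /W:1 /LOG:C:\Logs\finance-seed.log
```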
Why C (Distributed File System Replication – DFS-R) is Incorrect: DFS-R is a replication technology, not a migration tool. It is designed to keep file data synchronized between two or more active servers (DFS members) in real-time or on a schedule. While you could technically set up DFS-R between the old and new server to pre-seed the data, it is not its intended purpose. It does not migrate share configurations, and it has no cutover mechanism. In fact, DFS-R is often a source for migration to new technologies (like Azure File Sync), as it can be complex to manage.
Why D (Azure File Sync) is Incorrect: Azure File Sync is a hybrid service for centralizing file shares in Azure Files while keeping a local cache on-premises for performance. Its goal is to “tier” data to the cloud, not to perform a server-to-server migration. While you could use SMS to migrate into a server that is then enabled for Azure File Sync, Azure File Sync itself is not the tool you use to execute the migration from FS-Legacy to FS-Modern. It solves a different problem (hybrid file services) than SMS (server-to-server migration).
Question 125. You are the administrator for a 6-node Windows Server 2022 Hyper-V failover cluster. You need to implement a solution that automatically applies monthly Windows updates to all nodes in the cluster, one at a time, ensuring that Hyper-V roles are gracefully drained and moved to other nodes before a node is patched and rebooted. The entire process must be orchestrated to minimize disruption to running virtual machines. Which feature or technology should you configure to achieve this level of automated, cluster-aware patching?
A) Windows Server Update Services (WSUS)
B) Cluster-Aware Updating (CAU)
C) Azure Update Management
D) Desired State Configuration (DSC)
Correct Answer: B
Explanation:
The correct answer is B, Cluster-Aware Updating (CAU). This is a feature built directly into Windows Server Failover Clustering specifically designed to automate the patching of cluster nodes while maintaining service availability.
Why B (Cluster-Aware Updating – CAU) is Correct: Cluster-Aware Updating (CAU) provides an automated, “cluster-aware” solution for the entire update process. When an “Updating Run” is initiated (either manually or on a predefined schedule), CAU orchestrates the following complex workflow for each node in the cluster, one at a time:
Selects a Node: CAU selects the first node to update.
Drains Roles: It places the node into cluster maintenance mode. This is the critical step. Placing a node in maintenance mode automatically triggers a live migration of all running virtual machines (or other cluster roles) from that node to other available nodes in the cluster. This is done gracefully and without downtime for the VMs.
Applies Updates: Once the node is empty (has no active roles), CAU instructs the node to download and install the required updates. This can be from Windows Update, Microsoft Update, or an internal WSUS server.
Reboots (if necessary): If the updates require a reboot, CAU manages the restart of the node.
Rejoins Cluster: After the node comes back online, CAU verifies that it is healthy and brings it out of maintenance mode, making it available to host cluster roles again.
Repeats: CAU then moves to the next node in the cluster and repeats the entire process until all nodes are fully patched.
This workflow precisely matches the requirements: it’s automated, it handles draining and moving roles (VMs), and it minimizes disruption. CAU can be configured in two modes: “self-updating” (where the cluster pulls updates and manages the process itself on a schedule) or “remote-updating” (where an administrator or external orchestration tool kicks off the process).
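A brief sketch of both modes, assuming a cluster named HV-Cluster01 and an illustrative schedule:

```powershell
# Self-updating mode: the cluster patches itself on a recurring schedule
Add-CauClusterRole -ClusterName "HV-Cluster01" `
    -DaysOfWeek Sunday -WeeksOfMonth 2 `
    -MaxRetriesPerNode 3 -EnableFirewallRules -Force

# Remote-updating mode: start an updating run on demand from a management host
Invoke-CauRun -ClusterName "HV-Cluster01" -MaxFailedNodes 1 -MaxRetriesPerNode 3 -Force
```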
Why A (Windows Server Update Services – WSUS) is Incorrect: WSUS is a repository and approval system for Windows updates. You would configure your cluster nodes (and all other servers) to get their updates from your WSUS server instead of the public Microsoft Update servers. This gives you control over which patches are approved. However, WSUS has no awareness of a failover cluster. It cannot orchestrate the draining of roles, placing nodes in maintenance mode, or patching nodes sequentially. If WSUS pushed updates to all 6 nodes at once, they might all try to reboot simultaneously, causing a complete cluster outage. CAU often uses WSUS as its update source, but CAU is the orchestrator.
Why C (Azure Update Management) is Incorrect: Azure Update Management, part of Azure Automation and often used with Azure Arc-enabled servers, is a cloud-based solution for managing updates across Azure VMs and on-premises servers. While it is a powerful tool for hybrid update management and can schedule update deployments, it is not natively cluster-aware in the same way CAU is. It can be configured to run pre/post-scripts, and you could write complex scripts to place a node in maintenance mode, but CAU is the out-of-the-box, purpose-built solution that handles this complexity automatically for failover clusters. Using CAU is the direct and supported method.
Why D (Desired State Configuration – DSC) is Incorrect: PowerShell Desired State Configuration (DSC) is a management platform for configuration as code. You use DSC to declare the desired state of a server (e.g., “this service must be running,” “this feature must be installed,” “this registry key must exist”). The Local Configuration Manager (LCM) on the server then works to enforce that state. While you could use DSC to ensure update-related services are running, it is not an update orchestration tool. It does not manage the workflow of downloading, installing, and rebooting for monthly patches, nor is it cluster-aware.
Question 126. Your organization is implementing a defense-in-depth security strategy for its Windows Server 2022 domain controllers. To mitigate Pass-the-Hash (PtH) and other credential theft attacks, you need to implement a solution that uses virtualization-based security (VBS) to isolate and protect the Local Security Authority Subsystem Service (LSASS) process, preventing attackers from dumping NTLM hashes and Kerberos TGTs from memory. Which Windows Server security feature must be enabled?
A) Just Enough Administration (JEA)
B) Windows Defender Application Control (WDAC)
C) Credential Guard
D) Shielded VMs
Correct Answer: C
Explanation:
The correct answer is C, Credential Guard. This feature is specifically designed to address the threat of credential theft attacks like Pass-the-Hash by using virtualization-based security (VBS).
Why C (Credential Guard) is Correct: Credential Guard is a high-impact security feature that directly mitigates credential theft. It leverages Windows virtualization-based security (VBS), which requires a Hyper-V hypervisor (even if no VMs are running) and a TPM (Trusted Platform Module) version 2.0.
Here is how it works:
VBS Environment: When enabled, VBS creates an isolated, “virtual secure mode” (VSM) that is separated from the normal Windows kernel.
LSASS Isolation: Credential Guard moves the core components of the Local Security Authority Subsystem Service (LSASS) that store sensitive credentials (like NTLM hashes and Kerberos Ticket-Granting Tickets) into this isolated VSM environment.
Process Protection: The familiar “lsass.exe” process remains in the normal operating system only as a proxy, while the isolated process that actually holds the secrets (LSAIso.exe) runs inside VSM. The VSM is not accessible from the normal OS kernel, not even by code running with full kernel-level privileges (like a rootkit or a malicious driver).
Attack Mitigation: When an attacker compromises the server (even with administrator-level access) and tries to use tools like Mimikatz to dump memory and extract credentials from the LSASS process, they are unsuccessful. The process they can access (the proxy) contains no secrets. The actual secrets are locked away in the VSM, which is protected by the hypervisor.
This directly prevents the success of Pass-the-Hash and Pass-the-Ticket attacks, as the attacker can no longer steal the hashes or tickets from memory to impersonate users on other systems.
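One common way to enable VBS and Credential Guard without Group Policy is through the registry; the following is a minimal sketch assuming the documented DeviceGuard and LSA registry values, and a reboot is required before the protection is active:

```powershell
# Enable virtualization-based security with Secure Boot as the platform requirement
$dg = "HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard"
New-ItemProperty -Path $dg -Name EnableVirtualizationBasedSecurity -Value 1 -PropertyType DWord -Force
New-ItemProperty -Path $dg -Name RequirePlatformSecurityFeatures -Value 1 -PropertyType DWord -Force

# Enable Credential Guard (1 = enabled with UEFI lock); reboot required
$lsa = "HKLM:\SYSTEM\CurrentControlSet\Control\LSA"
New-ItemProperty -Path $lsa -Name LsaCfgFlags -Value 1 -PropertyType DWord -Force

# After reboot, SecurityServicesRunning should include 1 (Credential Guard)
Get-CimInstance -ClassName Win32_DeviceGuard -Namespace root\Microsoft\Windows\DeviceGuard
```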
Why A (Just Enough Administration – JEA) is Incorrect: Just Enough Administration (JEA) is a security feature that helps implement the principle of least privilege for administrators. It uses PowerShell to create constrained administrative endpoints, allowing users to perform specific, pre-approved tasks (e.g., restart a service) without granting them full local administrator rights. While JEA is a critical part of a defense-in-depth strategy (it helps prevent the initial compromise), it does not protect credentials in memory if an attacker successfully gains administrative privilege.
Why B (Windows Defender Application Control – WDAC) is Incorrect: Windows Defender Application Control (WDAC), formerly known as Device Guard, is an application whitelisting technology. It creates policies that define which applications, scripts, and drivers are trusted to run on the system. Any code not explicitly trusted is blocked from executing. Like JEA, WDAC is an excellent defense (it could prevent the attacker’s “Mimikatz.exe” from running in the first place), but it is not the feature that uses VBS to isolate LSASS and protect credentials in memory.
Why D (Shielded VMs) is Incorrect: Shielded VMs are a security feature for the Hyper-V fabric, not for the host operating system or domain controllers directly (unless the DC itself is a VM). Shielded VMs protect the data inside a virtual machine from a compromised Hyper-V host administrator. It encrypts the VM’s state and data (using vTPM and BitLocker) so that a host admin cannot access the VM’s disk (VHDX) or inspect its memory. This is the inverse of the scenario; the question asks to protect the host OS (the domain controller) from attacks, not to protect a VM from the host.
Question 127. A new security policy at your company dictates that helpdesk staff must be able to perform routine administrative tasks on a sensitive file server running Windows Server 2019. These tasks include restarting specific services and clearing print queues. However, the staff must not be granted full Local Administrator rights on the server, nor should they be able to use RDP. You need to implement a solution that provides them with a limited, non-GUI, and highly-auditable set of commands. Which Windows security feature is the ideal solution for this requirement?
A) Role-Based Access Control (RBAC) in Windows Admin Center
B) Just Enough Administration (JEA)
C) AppLocker
D) Dynamic Access Control (DAC)
Correct Answer: B
Explanation:
The correct answer is B, Just Enough Administration (JEA). JEA is a PowerShell-based security technology specifically created to enable delegated administration for specific tasks, adhering to the principle of least privilege, which perfectly matches the scenario.
Why B (Just Enough Administration – JEA) is Correct: Just Enough Administration (JEA) is a feature built into PowerShell that allows you to create constrained endpoints. When a user connects to a JEA endpoint (e.g., via Enter-PSSession -ComputerName Server -ConfigurationName HelpdeskTasks), their session is severely restricted. Here is how JEA directly solves the problem:
Reduced Privilege: The user’s session runs as a temporary, virtual, non-administrator account. They do not use their own high-privilege credentials, and they are not local admins on the box. This directly meets the “must not be granted full Local Administrator rights” requirement.
Limited Commands: You define what the user can do using a Role Capability File (.psrc). In this file, you explicitly whitelist the exact cmdlets, functions, and external commands the user is allowed to run. For this scenario, you would only allow Restart-Service -Name Spooler, Get-Service, Get-PrintJob, Remove-PrintJob, etc. If they try to run Restart-Computer or Add-LocalGroupMember, the command will fail because it’s not in the allowed list.
No RDP/GUI: JEA is accessed purely through PowerShell remoting. This meets the “non-GUI” requirement and prevents RDP access.
Auditable: All commands run within a JEA session are automatically logged in detailed PowerShell transcripts and event logs, providing a clear audit trail of exactly what actions the helpdesk staff performed.
JEA provides a temporary, low-privilege, and highly-constrained administrative session, which is the precise definition of what is required.
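A condensed sketch of such a JEA endpoint; the module path, endpoint name, and the CONTOSO\Helpdesk group are illustrative placeholders:

```powershell
# 1. Role capability file inside a module folder, whitelisting only the needed commands
$rcDir = "C:\Program Files\WindowsPowerShell\Modules\HelpdeskJEA\RoleCapabilities"
New-Item -Path $rcDir -ItemType Directory -Force | Out-Null
New-PSRoleCapabilityFile -Path "$rcDir\HelpdeskTasks.psrc" `
    -VisibleCmdlets @{ Name = 'Restart-Service'; Parameters = @{ Name = 'Name'; ValidateSet = 'Spooler' } },
                    'Get-Service', 'Get-PrintJob', 'Remove-PrintJob'

# 2. Session configuration: transient virtual run-as account, restricted session, transcripts
New-Item -Path "C:\JEA\Transcripts" -ItemType Directory -Force | Out-Null
New-PSSessionConfigurationFile -Path "C:\JEA\HelpdeskTasks.pssc" `
    -SessionType RestrictedRemoteServer -RunAsVirtualAccount `
    -TranscriptDirectory "C:\JEA\Transcripts" `
    -RoleDefinitions @{ 'CONTOSO\Helpdesk' = @{ RoleCapabilities = 'HelpdeskTasks' } }

# 3. Register the constrained endpoint that helpdesk staff reach with:
#    Enter-PSSession -ComputerName <server> -ConfigurationName HelpdeskTasks
Register-PSSessionConfiguration -Name HelpdeskTasks -Path "C:\JEA\HelpdeskTasks.pssc" -Force
```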
Why A (Role-Based Access Control – RBAC – in Windows Admin Center) is Incorrect: Windows Admin Center (WAC) does have its own RBAC model. You can grant users “Reader” or “Contributor” access, or access to specific extensions. This could be a valid solution, as WAC also allows granular control. However, JEA is the more fundamental, built-in, and scriptable platform feature for this. The WAC RBAC model, in many ways, builds upon the concepts of JEA and constrained PowerShell. JEA is the core technology designed for command-line access, as specified by “non-GUI.” While WAC is a GUI that can provide limited access, JEA is the protocol-level non-GUI solution.
Why C (AppLocker) is Incorrect: AppLocker is an application whitelisting technology. It is used to control which applications (.exe), scripts (.ps1), and installers (.msi) users are allowed to run on a server. For example, you would use AppLocker to prevent the helpdesk staff from running powershell.exe at all in a normal session. It does not create a limited administrative session; it restricts a normal user session. It is a preventative control, not a delegated administration framework.
Why D (Dynamic Access Control – DAC) is Incorrect: Dynamic Access Control (DAC) is a technology framework focused on data governance and file access, not administrative tasks. DAC allows you to classify files (e.g., “Confidential”) using tags and then write complex access policies based on user claims (e.g., “User’s department = Finance”) and resource properties. It is used to control who can access data on a file server, not who can administer the file server.
Question 128. You are managing a two-site, active-active data center infrastructure. You need to deploy a new Windows Server 2022 failover cluster for a mission-critical SQL Server instance. The cluster nodes will be geographically distributed between the two sites, “SiteA” and “SiteB,” which are connected by a high-speed, low-latency dark fiber link. You are using Storage Spaces Direct (S2D) for storage. To provide synchronous data replication between the two sites and allow the SQL Server instance to automatically failover to either site, which S2D and clustering feature must you implement?
A) Hyper-V Replica
B) Stretch Cluster
C) Storage Replica with Asynchronous Replication
D) Azure Site Recovery (ASR)
Correct Answer: B
Explanation:
The correct answer is B, Stretch Cluster. The “Stretch Cluster” feature is the Microsoft terminology for a single failover cluster (often using Storage Spaces Direct) whose nodes are geographically distributed across different physical sites.
Why B (Stretch Cluster) is Correct: A Stretch Cluster is a high-availability and disaster-recovery solution combined. Here is how it works and why it fits the scenario:
Single Cluster, Two Sites: It is one single failover cluster. The nodes in “SiteA” and the nodes in “SiteB” are all members of the same cluster, “SQLCluster01.”
Site Awareness: The cluster is configured with “site awareness.” You define which nodes belong to SiteA and which belong to SiteB. This allows the cluster to enforce “site fault tolerance,” meaning it will try to keep at least one copy of clustered roles (like the SQL Server instance) active in each site if possible, or ensure it can failover to the other site.
Storage Replication (S2D): When you configure a Stretch Cluster with Storage Spaces Direct, S2D itself handles the storage replication. You configure S2D to perform synchronous replication between the two sites. This means when the SQL Server instance writes data to its Cluster Shared Volume (CSV) in SiteA, that I/O operation is simultaneously written to the storage on the nodes in SiteA and to the storage on the nodes in SiteB. The application (SQL Server) does not get the “write complete” acknowledgment until the data is safe in both locations. This ensures zero data loss (RPO=0) in the event of a full site failure.
Automatic Failover: Because it is one cluster and the data is synchronously replicated, if SiteA fails completely (e.g., power outage), the cluster services in SiteB will detect the failure, and the cluster will automatically bring the SQL Server instance online on a node in SiteB. The data will be 100% current.
This solution provides the active-active (or active-passive) site-level fault tolerance and automatic failover requested.
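A short sketch of the site-awareness piece, defining cluster fault domains with hypothetical node and site names:

```powershell
# Define the two sites as fault domains
New-ClusterFaultDomain -Name "SiteA" -Type Site -Description "Primary datacenter"
New-ClusterFaultDomain -Name "SiteB" -Type Site -Description "Secondary datacenter"

# Assign each node to its site
Set-ClusterFaultDomain -Name "Node1" -Parent "SiteA"
Set-ClusterFaultDomain -Name "Node2" -Parent "SiteA"
Set-ClusterFaultDomain -Name "Node3" -Parent "SiteB"
Set-ClusterFaultDomain -Name "Node4" -Parent "SiteB"

# Optionally keep roles in SiteA unless that site fails
(Get-Cluster).PreferredSite = "SiteA"
```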
Why A (Hyper-V Replica) is Incorrect: Hyper-V Replica is a technology for replicating virtual machines from one Hyper-V host (or cluster) to another. It is asynchronous by default (with replication intervals of 30 seconds, 5, or 15 minutes), meaning some data loss is expected. It also requires a manual failover; it does not provide the automatic failover of a single cluster. It is a disaster recovery solution for VMs, not a high-availability solution for a cluster instance.
Why C (Storage Replica with Asynchronous Replication) is Incorrect: Storage Replica is a technology that can perform block-level replication (either synchronous or asynchronous) between volumes. It is often used to create a stretch cluster without S2D, or to replicate data between two separate clusters. However, the question specifies S2D is being used. S2D Stretch Clustering manages its own replication. Furthermore, the option specifies asynchronous replication, which would not provide the zero data loss (RPO=0) required for a mission-critical SQL Server in an automatic failover scenario. Synchronous replication is required.
Why D (Azure Site Recovery – ASR) is Incorrect: Azure Site Recovery (ASR) is a service for replicating workloads to Azure (or between on-premises VMM sites) for disaster recovery. It is not used to create a stretch cluster between two on-premises sites. ASR is about failing over to the cloud, whereas a stretch cluster is about failing over to another on-premises site.
Question 129. You are evaluating disaster recovery (DR) solutions for a Windows Server 2019 Hyper-V environment. You have a primary data center and a secondary DR data center connected by a 1 Gbps WAN link with 50ms latency. You need to replicate several critical application VMs. The business has stated that a maximum of 5 minutes of data loss (RPO) is acceptable, and the recovery process must be manually initiated by an administrator (RTO is flexible). The solution should be integrated with Hyper-V and not require third-party hardware or Azure services. Which technology is the most appropriate and cost-effective solution?
A) Storage Spaces Direct (S2D) Stretch Cluster
B) Hyper-V Replica
C) Azure Site Recovery (ASR)
D) Storage Replica with Synchronous Replication
Correct Answer: B
Explanation:
The correct answer is B, Hyper-V Replica. This feature is built into the Windows Server Hyper-V role and is designed for exactly this type of scenario: asynchronous, VM-level replication between two locations for disaster recovery.
Why B (Hyper-V Replica) is Correct: Hyper-V Replica (also known as HVR) is a VM-centric disaster recovery solution. Here is why it is the perfect fit:
Built-in and Cost-Effective: It is an included feature of the Hyper-V role. It does not require any additional licensing, complex storage (like S2D), or cloud services (like ASR). It can replicate VMs from one standalone host to another, from a cluster to a standalone host, or from one cluster to another.
Asynchronous Replication: HVR is an asynchronous replication technology. You can configure the replication frequency to be 30 seconds, 5 minutes, or 15 minutes. The scenario’s requirement for a 5-minute RPO (Recovery Point Objective) is perfectly met by selecting the 5-minute replication interval. This means that, at most, 5 minutes of data changes would be lost during a disaster; a brief configuration sketch follows this list.
Network Tolerance: Asynchronous replication is well-suited for WAN links with moderate latency, like the 50ms link described. It does not require a low-latency, high-bandwidth connection.
Manual Failover: The failover process for Hyper-V Replica is not automatic. An administrator must manually initiate a “Planned Failover,” “Unplanned Failover,” or “Test Failover” from the Hyper-V Manager or Failover Cluster Manager. This aligns perfectly with the requirement that the “recovery process must be manually initiated.”
No Hardware/Azure Dependency: The solution does not require any specific storage hardware (it works on any storage Hyper-V can see) and, as the question states, does not require Azure services.
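A minimal sketch of enabling replication for one VM with the 5-minute interval, assuming the replica host HV-DR01 has already been configured to accept replication (VM name, host name, and port are illustrative):

```powershell
# Enable replication with a 300-second (5-minute) interval over Kerberos/HTTP
Enable-VMReplication -VMName "APP-VM01" `
    -ReplicaServerName "HV-DR01.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300

# Send the initial copy across the WAN link
Start-VMInitialReplication -VMName "APP-VM01"
```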
Why A (Storage Spaces Direct – S2D – Stretch Cluster) is Incorrect: An S2D Stretch Cluster is a synchronous replication, automatic failover, high-availability solution. It is massive overkill for this scenario. It requires very low latency (typically <5ms) between sites, which the 50ms link violates. It also provides an RPO of zero and automatic failover, which contradicts the stated requirements of a 5-minute RPO and manual failover.
Why C (Azure Site Recovery – ASR) is Incorrect: Azure Site Recovery is a powerful DR solution, but the question explicitly states the solution should “not require… Azure services.” ASR’s primary function is to replicate workloads to Azure. While it can be used to orchestrate replication between two on-premises VMM sites, this is a less common scenario and still involves the Azure-based orchestration plane. Hyper-V Replica is the simpler, non-Azure, built-in tool.
Why D (Storage Replica with Synchronous Replication) is Incorrect: Storage Replica is a block-level replication technology. The option specifies synchronous replication. Synchronous replication is intolerant of high latency; it requires a very fast, low-latency network (typically <5ms RTT) to function without severely degrading the performance of the primary application. The 50ms latency of the WAN link makes synchronous replication completely infeasible. While Storage Replica also supports asynchronous replication, this option specifically names synchronous, making it incorrect. Hyper-V Replica is simpler and operates at the VM level, which is more appropriate here.
Question 130. You are deploying a new Windows Server 2022 server, “SRV-Core,” using the Server Core installation option to minimize its attack surface and resource consumption. You need to manage this server remotely using a modern, graphical, web-based interface. You have another server, “MgmtSrv,” running Windows Server 2022 with the Desktop Experience, which is designated as your management gateway. What should you install and configure on “MgmtSrv” to provide this web-based management for “SRV-Core” and other servers in your environment?
A) Remote Server Administration Tools (RSAT)
B) System Center Virtual Machine Manager (SCVMM)
C) Windows Admin Center (WAC)
D) PowerShell Web Access
Correct Answer: C
Explanation:
The correct answer is C, Windows Admin Center (WAC). Windows Admin Center is Microsoft’s modern, browser-based, graphical management tool for Windows Server, perfectly suited for managing Server Core installations.
Why C (Windows Admin Center – WAC) is Correct: Windows Admin Center (WAC) is a flexible, locally-deployed, browser-based management platform. It is the designated successor to the in-box, built-in management tools (like Server Manager and the various MMC snap-ins).
Web-Based GUI: It provides a rich, graphical interface that runs in a modern web browser (like Microsoft Edge or Google Chrome). This directly meets the “graphical, web-based interface” requirement.
Server Core Management: WAC is the ideal tool for managing Server Core. Since Server Core has no local GUI, a remote GUI tool is essential. WAC provides a comprehensive set of tools for managing nearly every aspect of the server: certificates, devices, event logs, files, firewall, installed apps, local users/groups, networking, performance monitoring, PowerShell, registry, services, storage, and updates.
Gateway Mode: You install WAC on a management server (like “MgmtSrv,” running with Desktop Experience) in “Gateway Mode.” This gateway server then proxies the connections (using WinRM over HTTPS) to all the target servers you want to manage (like “SRV-Core”). Administrators connect their browsers to the gateway, authenticate, and then select the server to manage. This is a secure and centralized management model.
Modern Platform: WAC is actively developed and includes integrations for hybrid Azure services (like Azure Monitor, Azure Backup, and Azure File Sync), making it a key component of a hybrid infrastructure.
Why A (Remote Server Administration Tools – RSAT) is Incorrect: RSAT is a collection of the traditional MMC snap-ins (like Server Manager, Active Directory Users and Computers, DNS Manager, etc.) that you install on a client operating system (like Windows 11) or a server with Desktop Experience. While RSAT is a graphical tool for remote management, it is not “web-based.” It is a set of thick-client applications. WAC is the modern, web-based replacement.
Why B (System Center Virtual Machine Manager – SCVMM) is Incorrect: SCVMM is a component of the System Center suite. It is a very powerful, enterprise-grade management solution, but it is focused specifically on managing the virtualization fabric: Hyper-V hosts, clusters, networking (SDN), and storage (S2D). It is not a general-purpose server management tool for tasks like managing files, services, or event logs on a single Server Core instance. It would be massive overkill and the wrong tool for the job.
Why D (PowerShell Web Access) is Incorrect: PowerShell Web Access is a feature in Windows Server that provides a web-based PowerShell console. It is literally a PowerShell command prompt inside a web browser. While this is web-based and can manage Server Core, it is not a “graphical” interface as the question specifies. It is a text-based, command-line interface, which is the opposite of what was asked. WAC provides the graphical tools and an integrated PowerShell console.
Question 131. Your organization uses Windows Admin Center (WAC) as its primary management tool, operating in gateway mode from a dedicated management server. A new security policy requires that all WAC users authenticate using their Active Directory credentials and a multi-factor authentication (MFA) challenge. You are also leveraging other Azure hybrid services. How can you enforce MFA for access to the Windows Admin Center gateway?
A) Configure WAC to use Azure Active Directory (Azure AD) authentication and enable a Conditional Access policy.
B) Implement Smart Card authentication for the WAC gateway’s web server certificate.
C) Configure the Windows Admin Center gateway to run in “service mode” on a domain controller.
D) Enable “Just Enough Administration” (JEA) on the WAC gateway server.
Correct Answer: A
Explanation:
The correct answer is A. Windows Admin Center can be natively integrated with Azure Active Directory (Azure AD) to enforce modern authentication controls, including Multi-Factor Authentication (MFA) through Conditional Access policies.
Why A (Configure WAC to use Azure AD authentication…) is Correct: Windows Admin Center (WAC) has a first-class integration with Azure AD for gateway authentication. Instead of relying solely on on-premises Active Directory credentials (which prompts for a simple username/password), you can configure the WAC gateway to use Azure AD. This process involves:
Registering an App: You register the WAC gateway as an application within your Azure AD tenant.
Configuring WAC: You configure the WAC gateway installation to use Azure AD for authentication, pointing it to the registered application’s details.
Enforcing MFA: Once WAC is using Azure AD to authenticate users, you can leverage the full power of Azure AD’s security features. You create a Conditional Access policy in Azure AD. This policy can be configured to target the specific “Windows Admin Center” application and state that “all users (or specific groups) accessing this application must satisfy the ‘Require multi-factor authentication’ grant control.”
After this is configured, when a user browses to the WAC gateway URL, they will be redirected to the standard Azure AD login page, be required to enter their AAD credentials, and then be prompted for their MFA challenge (e.g., a push notification on their phone, a 6-digit code) before they are granted access to the WAC interface. This precisely meets the requirement.
Why B (Implement Smart Card authentication…) is Incorrect: Implementing Smart Card authentication (which is a form of two-factor authentication) would be a complex configuration applied to Active Directory and the gateway server’s IIS configuration (if WAC is hosted in IIS, which isn’t the default). While technically a form of strong authentication, it is not the same as the “MFA” (implying cloud-based MFA) requested. The most direct, modern, and Azure-integrated way to achieve this is via Azure AD, especially since the organization is already using other Azure hybrid services.
Why C (Configure the WAC gateway to run in “service mode”…) is Incorrect: Running WAC in “service mode” on a domain controller is not a valid configuration and is highly discouraged from a security perspective (you should never run management gateways or web servers on a DC). Furthermore, the WAC operating mode (standalone, gateway, or service) does not inherently control the authentication method. It’s a nonsensical option.
Why D (Enable “Just Enough Administration” – JEA – on the WAC gateway) is Incorrect: Just Enough Administration (JEA) is a technology for constraining what a user can do via PowerShell after they have already authenticated. It limits their commands to a pre-approved set. It has absolutely no bearing on the initial authentication process to the WAC web interface. JEA secures PowerShell endpoints, not the WAC web application’s login mechanism.
Question 132. You are tasked with securing a Windows Server 2022 deployment against unknown and zero-day malware. The organization wants a solution that enforces a “default-deny” posture, where only explicitly approved and signed applications and drivers are allowed to execute. The solution must be capable of using virtualization-based security (VBS) to protect its own policies from being tampered with by an administrator. Which combination of technologies should you implement?
A) AppLocker with default rules and Credential Guard.
B) Windows Defender Application Control (WDAC) with a code integrity policy and Hypervisor-Protected Code Integrity (HVCI).
C) BitLocker Drive Encryption and Windows Defender Antivirus.
D) Just Enough Administration (JEA) and Windows Firewall.
Correct Answer: B
Explanation:
The correct answer is B. Windows Defender Application Control (WDAC) is the application whitelisting solution that provides the “default-deny” posture, and Hypervisor-Protected Code Integrity (HVCI) is the specific VBS-backed feature that protects its policies.
Why B (WDAC with a code integrity policy and HVCI) is Correct: This option combines the two exact technologies designed for this.
Windows Defender Application Control (WDAC): This is Microsoft’s most robust application control and whitelisting solution. Unlike AppLocker, WDAC operates at a deeper level in the OS. You create a code integrity (CI) policy, which is essentially a whitelist of all trusted code (executables, DLLs, scripts, drivers). This policy is typically based on publisher certificates (e.g., “Trust all code signed by ‘Microsoft Corporation'”), file hashes, or folder paths. When the policy is “enforced,” the Windows kernel will not load or execute any code that does not match the policy. This provides the “default-deny” posture for unknown malware.
Hypervisor-Protected Code Integrity (HVCI): This is the virtualization-based security (VBS) component. HVCI (also known as Memory Integrity in Windows Security) moves the kernel-mode code integrity subsystem (the part that enforces the WDAC policy) into the isolated “virtual secure mode” (VSM), protected by the hypervisor. This means that even if an attacker gains full kernel-level (administrator) access to the server, they cannot tamper with the WDAC policy or inject malicious code into the kernel. The policy itself is protected by VBS, which meets the “protect its own policies from being tampered with” requirement.
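A brief sketch of creating and compiling a WDAC code integrity policy from a known-good reference system (the paths and the Publisher trust level are illustrative; a real deployment would run in audit mode before enforcement):

```powershell
# Scan a reference system and trust code by publisher certificate, falling back
# to file hash for unsigned code; -UserPEs includes user-mode binaries
New-CIPolicy -Level Publisher -Fallback Hash `
    -ScanPath "C:\" -UserPEs `
    -FilePath "C:\Policies\GoldenServer.xml"

# Compile the XML policy into the binary form the code integrity engine loads
ConvertFrom-CIPolicy -XmlFilePath "C:\Policies\GoldenServer.xml" `
    -BinaryFilePath "C:\Windows\System32\CodeIntegrity\SiPolicy.p7b"
```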
Why A (AppLocker with default rules and Credential Guard) is Incorrect: AppLocker is an older application whitelisting technology. While good, it has known bypasses and is generally considered less secure than WDAC. More importantly, its policies are not protected by VBS. An administrator can easily disable AppLocker policies. Credential Guard is a VBS feature, but it protects credentials (LSASS), not application control policies. This option combines the wrong application control with the wrong VBS feature for the stated goal.
Why C (BitLocker Drive Encryption and Windows Defender Antivirus) is Incorrect: BitLocker provides data-at-rest encryption. It encrypts the hard drive so that if the drive is physically stolen, the data cannot be read. It does nothing to prevent malware from executing on a running system. Windows Defender Antivirus is a traditional, signature-based (and heuristic) anti-malware solution. It is “default-allow” and tries to block known threats, which is the exact opposite of the “default-deny” posture requested for unknown and zero-day threats.
Why D (Just Enough Administration – JEA – and Windows Firewall) is Incorrect: JEA is a least-privilege solution for administrative tasks, and Windows Firewall is a network filtering solution. Neither of these technologies controls which applications or drivers are allowed to execute on the server. They are irrelevant to the core requirement of application whitelisting.
Question 133. You are investigating a performance issue on a Windows Server 2019 file server. Users are reporting intermittent slowness. You suspect a storage bottleneck but Performance Monitor (PerfMon) only shows you real-time data, and Event Logs are reactive. You want to implement a solution that uses built-in machine learning models to locally analyze system data and predict future bottlenecks, such as CPU or Storage capacity exhaustion, before they occur. Which feature, installable via Server Manager or Windows Admin Center, provides this predictive analytic capability?
A) Azure Monitor
B) System Insights
C) Data Collector Sets in Performance Monitor
D) Storage Replica
Correct Answer: B
Explanation:
The correct answer is B, System Insights. This is a feature introduced in Windows Server 2019 that provides local predictive analytics.
Why B (System Insights) is Correct: System Insights is a feature designed specifically to bring local predictive analytics capabilities to Windows Server. It operates by:
Local Analysis: It runs entirely on the server itself, without requiring any cloud connectivity (though it can be viewed in Windows Admin Center).
Data Collection: It leverages existing system data, such as Performance Monitor counters and Event Logs.
Machine Learning: It uses built-in, local machine learning models to analyze this historical data.
Forecasting: It provides predictive forecasts for key system resources. By default, it includes capabilities to forecast:
CPU capacity: Predicting when CPU usage will hit a sustained high threshold.
Network capacity: Predicting usage for physical network adapters.
Storage consumption: Predicting when a logical volume will run out of free space.
Total storage consumption: Predicting overall storage usage.
When a potential future issue is detected (e.g., “Volume C: is forecast to be full in 14 days”), it generates an event in the Event Log, which can then be used to trigger alerts or automated responses. This directly matches the requirement for a local, predictive solution for capacity exhaustion.
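A short sketch of installing the feature and invoking one of the default forecasting capabilities on demand (the capability names shown are the built-in defaults):

```powershell
# Install System Insights, then list and run its predictive capabilities
Install-WindowsFeature -Name System-Insights -IncludeManagementTools

Get-InsightsCapability

Invoke-InsightsCapability -Name "Volume consumption forecasting"
Get-InsightsCapabilityResult -Name "Volume consumption forecasting"
```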
Why A (Azure Monitor) is Incorrect: Azure Monitor is Microsoft’s cloud-based monitoring and analytics platform. To use it, you would typically onboard the server using Azure Arc and the Log Analytics agent. Azure Monitor is incredibly powerful, using its own ML-based “Insights” (e.g., VM Insights) and anomaly detection, but the question implies a local solution (“locally analyze system data”). System Insights is the built-in, on-premises feature for this specific predictive task.
Why C (Data Collector Sets in Performance Monitor) is Incorrect: Data Collector Sets are a feature of Performance Monitor (PerfMon). They are used to collect and log performance counter data over a long period. You can schedule them to run, gather data, and save it to a log file. This provides the historical data needed for analysis, but PerfMon itself has no built-in machine learning or predictive forecasting engine. You would have to manually export the data and analyze it in another tool (like Excel) to try and make a forecast. System Insights automates this analysis.
Why D (Storage Replica) is Incorrect: Storage Replica is a disaster recovery and data replication feature. Its purpose is to perform block-level replication of volumes between servers or clusters. It has absolutely no function related to performance monitoring or predictive analytics.
Question 134. You are designing a hybrid file server solution for your organization. The goal is to consolidate several on-premises Windows file servers into a central, cloud-based repository to reduce on-premises storage and backup infrastructure. However, branch offices need to access these files with low latency, just as if they were on a local server. You also need to preserve the existing file structure and NTFS permissions. Which Azure hybrid service is designed to create a “cloud-tiered” cache of an Azure File Share on an on-premises Windows Server?
A) Azure Site Recovery (ASR)
B) Storage Migration Service (SMS)
C) Azure File Sync
D) Distributed File System (DFS)
Correct Answer: C
Explanation:
The correct answer is C, Azure File Sync. This service is purpose-built to synchronize on-premises Windows Servers with Azure File Shares, providing a tiered-storage cache.
Why C (Azure File Sync) is Correct: Azure File Sync provides a seamless bridge between on-premises Windows file servers and cloud-based Azure File Shares. It addresses all the requirements in the scenario:
Central Cloud Repository: The “golden copy” or authoritative source of all data is stored in an Azure File Share (the “Cloud Endpoint”). This consolidates the data in the cloud, simplifying backup (using Azure Backup for File Shares) and management.
Local Cache with Low Latency: You install the Azure File Sync agent on one or more on-premises Windows Servers (the “Server Endpoints”), such as in the branch offices. These servers “sync” with the cloud endpoint. Users in the branch office access the local server. This server maintains a cache of the data, providing LAN-speed, low-latency access.
Cloud Tiering: This is the key feature. To save local storage space, Azure File Sync “tiers” cold or infrequently accessed files. The file metadata (the filename, permissions) remains on the local server, making it look like the file is still there, but the data content is purged from the local disk and exists only in Azure. If a user tries to open a tiered file, the agent seamlessly recalls the data from Azure on demand.
Preserves Metadata: Azure File Sync synchronizes the full file structure, NTFS ACLs (permissions), and timestamps between the on-premises server(s) and the Azure File Share.
This “sync group” (one cloud endpoint, one or more server endpoints) creates a distributed, centrally-managed file service with local performance.
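As an illustration, a hedged PowerShell sketch of building such a sync group with the Az.StorageSync module; every resource name and path here is a placeholder, and it assumes the Azure File Sync agent is already installed and the server is registered with the Storage Sync Service:

```powershell
$rg          = "MyResourceGroup"           # placeholder
$syncService = "MyStorageSyncService"      # placeholder

# Create the sync group
New-AzStorageSyncGroup -ResourceGroupName $rg -StorageSyncServiceName $syncService -Name "FileShareSync"

# Cloud endpoint: the Azure File Share that holds the authoritative copy
New-AzStorageSyncCloudEndpoint -ResourceGroupName $rg -StorageSyncServiceName $syncService `
    -SyncGroupName "FileShareSync" -Name "CloudEndpoint" `
    -StorageAccountResourceId "/subscriptions/<sub-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageacct" `
    -AzureFileShareName "corp-files"

# Server endpoint: the branch office server's local path, with cloud tiering on
$server = Get-AzStorageSyncServer -ResourceGroupName $rg -StorageSyncServiceName $syncService   # assumes one registered server

New-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $syncService `
    -SyncGroupName "FileShareSync" -Name "BranchServer" `
    -ServerResourceId $server.ResourceId -ServerLocalPath "D:\Shares" `
    -CloudTiering -VolumeFreeSpacePercent 30
```

The -CloudTiering and -VolumeFreeSpacePercent parameters correspond to the tiering behavior described above: the server keeps hot data locally and tiers cold file content to the Azure File Share once local free space drops below the threshold.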
Why A (Azure Site Recovery – ASR) is Incorrect: ASR is a disaster recovery service. It replicates entire virtual machines (or physical servers) to Azure for business continuity. It does not create a synchronized, tiered file cache; it creates a dormant, replicated copy of a server for failover.
Why B (Storage Migration Service – SMS) is Incorrect: SMS is a tool for migrating file servers from an old server to a new server (which could be an on-premises server or an IaaS VM in Azure). Its purpose is a one-time (or multi-step) transfer and cutover. It does not create an ongoing, active synchronization or a tiered cache. You might use SMS to perform the initial migration into an Azure File Sync-enabled server, but it is not the sync service itself.
Why D (Distributed File System – DFS) is Incorrect: DFS (specifically DFS-Replication or DFS-R) is an on-premises technology for replicating file data between multiple on-premises servers. It does not have a native cloud endpoint or a concept of cloud tiering. It is a multi-master, peer-to-peer replication system. Azure File Sync is widely considered the modern, cloud-integrated replacement for DFS-R.
Question 135. You are managing a Windows Server 2022 failover cluster that hosts several critical file share roles. You need to apply security patches to the cluster nodes. During this maintenance, you must move a specific file share role, “FS-Role-01,” to a preferred node, “Node-03,” and ensure it does not automatically move back to any other node, even if “Node-03” is rebooted and rejoins the cluster. How should you configure the cluster role’s “failback” settings?
A) Set the role’s preferred owner to “Node-03” and enable “Allow Failback.”
B) Set the role’s preferred owner to “Node-03” and configure “Prevent Failback.”
C) Place all other nodes except “Node-03” in “Drained” status.
D) Set the role’s priority to “High” and its anti-affinity class name.
Correct Answer: B
Explanation:
The correct answer is B. To ensure a role stays on a specific node after it comes online, you must set that node as the preferred owner and prevent failback.
Why B (Set preferred owner to “Node-03” and configure “Prevent Failback”) is Correct: Understanding failover cluster settings is key here:
Preferred Owner: In Failover Cluster Manager, you can configure a “Preferred Owners” list for any clustered role. This list tells the cluster which node(s) the role should run on, in order. By setting “Node-03” as the first (or only) preferred owner, you are telling the cluster, “If this role has to start, try to start it on Node-03 first.”
Failback: Failback is the process that occurs after a failover.
Scenario: “FS-Role-01” is running on its preferred owner, “Node-03.” “Node-03” fails or is rebooted for patching. The cluster fails over the role to “Node-02.”
After Reboot: “Node-03” comes back online and rejoins the cluster.
With “Allow Failback” (the default): The cluster sees that “Node-03” (the preferred owner) is back online. It will then automatically fail back the role, meaning it will move “FS-Role-01” from “Node-02” back to “Node-03.” This causes a brief service interruption.
With “Prevent Failback”: The cluster sees that “Node-03” is back online, but the “Prevent Failback” setting tells it: “Do not automatically move the role back. Leave it running where it is (on Node-02) to avoid another service interruption.”
The question’s phrasing, “ensure it does not automatically move back to any other node, even if ‘Node-03’ is rebooted and rejoins the cluster,” describes the classic failback scenario. You manually move “FS-Role-01” to “Node-03” and patch the other nodes. When “Node-03” itself is patched and rebooted, the role fails over to another node (for example, “Node-02”). When “Node-03” rejoins the cluster, you do not want the cluster to move the role again automatically and cause a second service interruption (“flapping”).
Setting “Node-03” as the preferred owner defines where the role should run, and “Prevent Failback” is the explicit setting that stops the cluster from automatically moving a role back to its preferred owner when that node returns. Given the options, this combination is the correct mechanism; a minimal PowerShell sketch of it follows below.
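The sketch below uses the scenario’s names; the AutoFailbackType property value 0 corresponds to “Prevent Failback” and 1 to “Allow Failback”:

```powershell
# Make Node-03 the preferred owner of the clustered role
Set-ClusterOwnerNode -Group "FS-Role-01" -Owners "Node-03"

# Prevent automatic failback (0 = Prevent Failback, 1 = Allow Failback)
(Get-ClusterGroup -Name "FS-Role-01").AutoFailbackType = 0

# Move the role to Node-03 for the maintenance window
Move-ClusterGroup -Name "FS-Role-01" -Node "Node-03"
```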
Why A (Set preferred owner to “Node-03” and enable “Allow Failback”) is Incorrect: This is the default setting. This configuration would cause the role to automatically move back to “Node-03” after it reboots, which is precisely what the scenario wants to avoid (or at least, “Prevent Failback” is the explicit setting to control this behavior).
Why C (Place all other nodes except “Node-03” in “Drained” status) is Incorrect: Draining a node (or “Pause: Drain Roles”) proactively live-migrates roles off that node and prevents new roles from moving to it. If you drain all nodes except Node-03, then yes, “FS-Role-01” would be forced to run on “Node-03.” However, this is a temporary maintenance state. It is not a persistent configuration of the role itself. And if “Node-03” reboots, the role has nowhere to failover to, causing a complete outage. This is the wrong approach.
Why D (Set the role’s priority to “High” and its anti-affinity class name) is Incorrect: Priority (“High,” “Medium,” “Low”) determines the order in which roles are started up or failed over. Anti-affinity is used to ensure that two specific roles (e.g., two domain controllers) do not run on the same node at the same time. Neither of these settings controls the “stickiness” of a role to a specific node or its failback behavior.
Question 136. You are securing an on-premises Active Directory environment. A primary attack vector you want to mitigate is the use of NTLMv1, a deprecated and insecure authentication protocol. You need to audit your entire environment to identify which servers and applications are still making NTLM authentication requests. You decide to use a cloud-based service to ingest and analyze the security event logs from your domain controllers. Which hybrid security solution is designed to ingest these logs and provide analytics for identifying legacy protocol usage?
A) Azure Arc
B) Microsoft Defender for Identity (formerly Azure ATP)
C) Azure Site Recovery (ASR)
D) Credential Guard
Correct Answer: B
Explanation:
The correct answer is B, Microsoft Defender for Identity. This is a cloud-based security solution specifically designed to monitor on-premises Active Directory environments for threats, vulnerabilities, and misconfigurations, including the use of legacy protocols.
Why B (Microsoft Defender for Identity) is Correct: Microsoft Defender for Identity (MDI) is a key component of the Microsoft 365 Defender suite. It works by monitoring your on-premises Active Directory environment, typically as follows:
Sensor Deployment: You install the MDI sensor on your domain controllers (and AD FS servers).
Log Forwarding: The sensor captures and parses network traffic and Windows event logs (specifically the security logs related to authentication, like Event ID 4776 for NTLM).
Cloud Analytics: This data is sent to the MDI cloud service. MDI’s powerful analytics and machine learning engine analyzes these authentication requests from across the entire enterprise.
Reporting and Alerts: MDI provides rich reports, dashboards, and security alerts. It has a specific “Identity security posture” assessment that will explicitly call out “Legacy protocol usage (NTLMv1)” and “Clear text password exposure.” It can show you which users, which source devices, and which destination servers are involved in these NTLM requests.
This provides the exact auditing capability required to identify servers and applications still using NTLM.
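As a quick local complement to MDI, the NTLM credential-validation events it parses can also be inspected directly on a domain controller. A hedged sketch (“DC01” is a placeholder, and the query assumes you have remote event log access):

```powershell
# Pull recent NTLM credential validation events (Event ID 4776) from a
# domain controller's Security log to see which accounts and source
# workstations are still authenticating with NTLM.
Get-WinEvent -ComputerName "DC01" -FilterHashtable @{ LogName = 'Security'; Id = 4776 } -MaxEvents 200 |
    Select-Object TimeCreated, Message |
    Format-List
```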
Why A (Azure Arc) is Incorrect: Azure Arc is a management plane. It brings on-premises servers (via the Connected Machine agent) into Azure Resource Manager so you can manage them with Azure Policy, Monitor, etc. While you could use Azure Arc to deploy the MDI sensor or to collect event logs for Azure Monitor or Microsoft Sentinel, Azure Arc itself is not the analytics engine that “understands” NTLM authentication. Defender for Identity is the specialized service for this.
Why C (Azure Site Recovery – ASR) is Incorrect: ASR is a disaster recovery service for replicating virtual machines. It has no function related to analyzing authentication protocols or security logs.
Why D (Credential Guard) is Incorrect: Credential Guard is a preventative security feature for Windows Server and client. It uses virtualization-based security (VBS) to protect the LSASS process and prevent credential theft (like Pass-the-Hash). It is a hardening feature for a server, not an auditing or monitoring platform for the entire environment. It doesn’t report on NTLMv1 usage across the network; it just protects the credentials on the local machine where it’s enabled.
Question 137. You are responsible for a fleet of Windows Servers, some on-premises and some in Azure. You need a single, centralized solution to collect security events, performance data, and application logs from all servers. This solution must support a powerful, read-only query language for “threat hunting” and complex analysis, and it must integrate with automated remediation workflows. The on-premises servers have been onboarded using Azure Arc. What is the most appropriate primary service to use for this data aggregation and analysis?
A) System Insights
B) An Azure Automation account
C) An Azure Log Analytics workspace
D) Windows Server Update Services (WSUS)
Correct Answer: C
Explanation:
The correct answer is C, an Azure Log Analytics workspace. This is the foundational data platform for Azure Monitor and Microsoft Sentinel, designed for large-scale log aggregation and analysis from hybrid sources.
Why C (An Azure Log Analytics workspace) is Correct: An Azure Log Analytics workspace is the core component that meets all the requirements:
Centralized Solution: It is a cloud-based service designed to be the central repository for monitoring data from all sources.
Hybrid Collection: Through the Azure Arc connected machine agent, you can deploy the Azure Monitor Agent (the successor to the legacy Log Analytics agent) and configure Data Collection Rules (DCRs) to forward Windows event logs (Security, Application, System), performance counters, and other logs from your on-premises servers directly to the workspace. Azure VMs can be configured just as easily.
Powerful Query Language: The workspace stores all this data, which can then be analyzed using the Kusto Query Language (KQL). KQL is an extremely powerful, read-only query language designed for “threat hunting” and sifting through terabytes of log data to find specific patterns, anomalies, or events.
Integration: The Log Analytics workspace is the backbone for other Azure services. You can trigger alerts based on KQL query results, and these alerts can then invoke an Azure Automation runbook or an Azure Logic App to perform automated remediation (e.g., “if this security event is seen, run a script to disable the user account”). It is also the data backend for Microsoft Sentinel (for security-specific analysis) and Azure Monitor (for performance and health).
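A hedged example of the kind of KQL hunting query described above, run against the workspace from PowerShell with the Az.OperationalInsights module; the workspace ID is a placeholder, and the SecurityEvent table assumes security events are actually being collected:

```powershell
# Requires an authenticated Az session (Connect-AzAccount).
$workspaceId = "00000000-0000-0000-0000-000000000000"   # placeholder

$kql = @"
SecurityEvent
| where EventID == 4625                       // failed logons
| summarize FailedLogons = count() by Computer, Account
| top 10 by FailedLogons desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $kql
$result.Results | Format-Table -AutoSize
```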
Why A (System Insights) is Incorrect: System Insights is a local, on-premises feature that runs on a single server. It provides predictive analytics for that server only. It is not a centralized solution for aggregating logs from a fleet of servers.
Why B (An Azure Automation account) is Incorrect: An Azure Automation account is the service used for process automation (running PowerShell or Python runbooks), configuration management (Desired State Configuration), and update management. While it is used for the “automated remediation” part of the solution, it is not the data collection and analysis platform. It acts on data; it doesn’t store and query the raw logs.
Why D (Windows Server Update Services – WSUS) is Incorrect: WSUS is an on-premises service for managing the distribution and approval of Windows updates. It has no function related to collecting or analyzing security events or performance logs.
Question 138. You are configuring a new hyper-converged, 8-node Storage Spaces Direct (S2D) cluster using Windows Server 2022. The servers are connected via a redundant 25 GbE network using Switch Embedded Teaming (SET). To ensure the S2D (SMB) traffic has priority and receives a guaranteed amount of bandwidth, while also separating it from VM traffic and cluster management traffic, which technology should you configure on the host operating system’s network adapters?
A) Quality of Service (QoS) Policies
B) Data Center Bridging (DCB)
C) SMB Multichannel
D) Virtual Machine Queue (VMQ)
Correct Answer: A
Explanation:
The correct answer is A, Quality of Service (QoS) Policies. While DCB is related, it is configured on the physical switches to create a lossless fabric for RDMA. The host-level configuration to prioritize and allocate bandwidth to specific traffic types (like SMB) is done using QoS Policies.
Why A (Quality of Service – QoS – Policies) is Correct: In a converged or hyper-converged network, multiple traffic types (storage, VM, management, live migration) all share the same physical network adapters. To prevent one traffic type (like a large file copy from a VM) from “starving” a more critical traffic type (like an S2D storage write), you use Quality of Service (QoS).
In Windows Server, you can create QoS policies based on various criteria. For S2D, you would typically create policies to:
Tag Traffic: Identify and tag the SMB traffic used by S2D (typically on port 445 or 5445) with a specific priority.
Allocate Bandwidth: You can set a bandwidth minimum (e.g., “guarantee at least 50% of the bandwidth to SMB”) or a bandwidth maximum (e.g., “VM traffic can never use more than 20%”).
This ensures that the latency-sensitive S2D storage traffic always has the network resources it needs, regardless of what other applications are doing. This is configured on the host OS using PowerShell cmdlets (e.g., New-NetQosPolicy).
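A minimal host-side sketch of such policies; the priority and percentage values are illustrative, not prescriptive:

```powershell
# Tag SMB traffic (which carries the S2D storage I/O) with 802.1p priority 3.
New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3

# Put everything else in the default class at priority 0.
New-NetQosPolicy -Name "Default" -Default -PriorityValue8021Action 0

# Reserve a minimum share of bandwidth for the SMB priority using ETS.
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
```

Note that the traffic-class reservation uses ETS, which, as mentioned below, overlaps with the DCB feature set on the host; the policy cmdlets themselves are the host-level QoS configuration the question asks about.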
Why B (Data Center Bridging – DCB) is Incorrect: DCB is a prerequisite for using RDMA over Converged Ethernet (RoCE). Its primary function is to provide Priority-based Flow Control (PFC), which makes the Ethernet fabric lossless by preventing packet drops. While you would configure DCB on the physical switches and the host adapters if you were using RoCE, the question is asking how to prioritize and guarantee bandwidth, which is the function of QoS (specifically, ETS – Enhanced Transmission Selection – which is part of the DCB suite, but “QoS Policies” is the more direct answer for the host-level configuration of bandwidth allocation). In a non-RDMA S2D setup (using TCP/IP), you would still use QoS policies for bandwidth management, even without DCB.
Why C (SMB Multichannel) is Incorrect: SMB Multichannel is a feature of SMB 3.0 that automatically aggregates multiple network paths between a client and server. For example, it will use both 25 GbE ports simultaneously to create a 50 Gbps aggregated link. It provides performance and resiliency, but it does not prioritize SMB traffic over other traffic types (like VM traffic) that might be sharing the same adapters.
Why D (Virtual Machine Queue – VMQ) is Incorrect: VMQ is a network adapter feature that helps improve performance for virtual machine traffic. It creates separate hardware queues on the physical NIC for different VMs, and distributes the processing of that traffic across multiple CPU cores. It is for optimizing VM-to-network communication, not for managing host-level traffic priority between SMB and other protocols.
Question 139. Your organization is migrating a physical, legacy Windows Server 2008 R2 server to a new Hyper-V virtual machine running Windows Server 2022. The legacy server is not supported by the Storage Migration Service. You must perform a “Physical-to-Virtual” (P2V) migration. Which tool or process is the recommended method for performing a P2V conversion of a Windows Server?
A) Use the dism /capture-image command to create a WIM file.
B) Perform a “Windows Server Backup” of the physical server and restore it to a new VM.
C) Use the “Microsoft Virtual Machine Converter” (MVMC).
D) Use the “Disk2vhd” utility from the Sysinternals suite.
Correct Answer: D
Explanation:
The correct answer is D. Disk2vhd is a widely recognized and simple utility from the Sysinternals toolkit specifically designed to create a VHD (or VHDX) virtual disk image from a running physical system.
Why D (Use the “Disk2vhd” utility) is Correct: Disk2vhd is a lightweight, simple, and effective tool for P2V conversions.
Online Conversion: It can run on the live physical server while it is online. It uses Windows’ Volume Shadow Copy Service (VSS) to take a consistent, point-in-time snapshot of the system and data volumes.
Creates VHD/VHDX: It then streams this snapshot data into a new VHD or VHDX file (the virtual hard disk format used by Hyper-V).
Simple Process: Once the VHDX file is created, the P2V process is simple:
Copy the VHDX file to your Hyper-V host.
Create a new virtual machine in Hyper-V Manager.
When configuring the VM, instead of creating a new virtual disk, select “Use an existing virtual hard disk” and point it to the VHDX file you created.
Start the VM. (Note: You will likely need to remove old hardware drivers and install Hyper-V Integration Services).
This is the most direct and commonly used method for a simple P2V conversion, especially when more complex migration tools are not supported.
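A hedged sketch of the Hyper-V side of that process, run after Disk2vhd has produced the VHDX on the physical server and the file has been copied to the host; the VM name, memory size, and path are placeholders:

```powershell
# Generation 1 is used because Windows Server 2008 R2 is not supported
# as a Generation 2 guest.
New-VM -Name "Legacy2008R2" `
       -MemoryStartupBytes 4GB `
       -Generation 1 `
       -VHDPath "D:\VMs\Legacy2008R2.vhdx"

Start-VM -Name "Legacy2008R2"
```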
Why A (Use the dism /capture-image) is Incorrect: DISM (Deployment Image Servicing and Management) is used to capture a Windows Image (WIM) file. A WIM file is a file-based image used for OS deployment (installing Windows). It is not a block-level disk image. You cannot “boot” a VM from a WIM file; you can only install Windows from it. This would not migrate the server’s applications, data, or state.
Why B (Perform a “Windows Server Backup”) is Incorrect: Windows Server Backup creates a system-level backup. While you can perform a “bare-metal restore,” restoring a physical server backup to a virtual machine is not a supported or reliable P2V path. The restore process is designed to run on identical or very similar hardware and will almost certainly fail or have severe driver/HAL (Hardware Abstraction Layer) issues when trying to restore onto virtual hardware.
Why C (Use the “Microsoft Virtual Machine Converter” – MVMC) is Incorrect: The Microsoft Virtual Machine Converter (MVMC) was the official, recommended tool for P2V conversions. However, it was deprecated and retired in 2017. It is no longer supported and is not available for download. Therefore, it is not the current recommended method. Disk2vhd, while a “Sysinternals” tool, is the de facto successor for this task.
Question 140. You are an administrator for a company that runs a hybrid environment with Active Directory Domain Services (AD DS) on-premises and Azure Active Directory (Azure AD) in the cloud. You are using Azure AD Connect to synchronize user identities. The security team wants to ensure that if a user’s password is leaked in a public data breach, their account is proactively protected. You need to enable a feature that compares your users’ passwords (the synchronized hashes) against a global database of known leaked credentials and forces a password reset if a match is found. What is this Azure AD feature called?
A) Azure AD Privileged Identity Management (PIM)
B) Azure AD Password Protection
C) Azure AD Conditional Access
D) Azure AD Identity Protection
Correct Answer: B
Explanation:
The correct answer is B, Azure AD Password Protection. This service provides a global “banned” password list and a custom banned password list, and it specifically checks for leaked credentials.
Why B (Azure AD Password Protection) is Correct: Azure AD Password Protection is a feature designed to improve the strength of passwords in your organization. It has two primary capabilities:
Global and Custom Banned Passwords: It prevents users from setting weak or common passwords (like “Password123”). It checks against a global list of known weak passwords maintained by Microsoft, and you can add your own custom banned words (e.g., “Contoso,” “Q4-2025”). This applies to password changes in the cloud and on-premises (by deploying a proxy).
Leaked Credential Detection: This is the key part. This feature is integrated with Azure AD Identity Protection (which makes option D a very close distractor). However, the specific feature that “compares… passwords… against a global database of known leaked credentials” is the core functionality of Password Protection, which feeds risk data into Identity Protection. When Microsoft’s security researchers find new lists of breached credentials on the dark web, they add them to this database. Azure AD continuously compares your users’ password hashes against this database. If a match is found, the user’s “risk level” is elevated, and you can configure a policy (in Identity Protection) to force an immediate password reset.
The question asks for the feature that compares the passwords, which is the Password Protection service.
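For the on-premises enforcement piece mentioned above (applying the banned password lists to AD DS password changes), a hedged sketch of registering the proxy and forest, assuming the Azure AD Password Protection proxy agent is already installed on the server and the UPN shown is a placeholder administrator account:

```powershell
# Run on the server hosting the Azure AD Password Protection proxy service.
Import-Module AzureADPasswordProtection

Register-AzureADPasswordProtectionProxy  -AccountUpn "admin@contoso.onmicrosoft.com"
Register-AzureADPasswordProtectionForest -AccountUpn "admin@contoso.onmicrosoft.com"
```

The leaked-credential comparison itself happens in the cloud service and requires no on-premises deployment; only the banned-password enforcement needs the proxy and DC agents.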
Why A (Azure AD Privileged Identity Management – PIM) is Incorrect: PIM is a service for managing and monitoring privileged (administrator) roles. Its key features are “Just-in-Time” (JIT) access, where users must “activate” their admin role for a limited time, and “access reviews,” which require justification and approval for the role. It manages roles, not password strength or leaked credentials for all users.
Why C (Azure AD Conditional Access) is Incorrect: Conditional Access is the policy engine that enforces decisions. For example, a Conditional Access policy might say, “IF a user is high-risk (as determined by Identity Protection), THEN block access OR force MFA OR force a password reset.” Conditional Access is the action part, but it’s not the service that detects the leaked password in the first place.
Why D (Azure AD Identity Protection) is Incorrect: This is the most challenging distractor. Azure AD Identity Protection is the platform that consumes signals and reports on risk. It is the user interface where you see “Risky Users” and “Risk Detections.” One of these “risk detections” is “Leaked Credentials.” This risk signal is generated by the underlying Azure AD Password Protection service’s scanning feature. So, while you see the result in Identity Protection, the feature doing the comparison (as asked in the prompt) is Password Protection. In Microsoft’s documentation, these are often intertwined, but Password Protection is the specific technology for password-related hygiene and breached credential checking.