Microsoft AZ-801 Configuring Windows Server Hybrid Advanced Services Exam Dumps and Practice Test Questions Set2 Q21-40

Q21. Your organization has numerous file servers running Windows Server 2012 R2. You are tasked with migrating these file shares to new Windows Server 2022 VMs in Azure. The primary requirements are to migrate all file data, share permissions, and security configurations with minimal downtime. Crucially, you must also assume the identity (server name and IP) of the source servers during the cutover. Which hybrid service is designed for this end-to-end scenario?

A) Azure File Sync

B) Azure Migrate

C) Storage Migration Service (SMS)

D) Azure Backup

Answer: C

Explanation 

The correct answer is the Storage Migration Service (SMS). This tool, integrated within Windows Admin Center, is engineered specifically for migrating file server workloads from older Windows Server versions to newer ones, including Windows Server VMs in Azure. The scenario’s requirements—migrating files, share permissions (NTFS and SMB), and configurations—are the core competencies of SMS. The most distinguishing requirement, however, is the ability to assume the identity of the source server. SMS is unique among the options in its ability to perform a cutover stage where it takes over the source server’s name and IP address, effectively redirecting all clients to the new destination server without any client-side reconfiguration. This cutover phase is what facilitates the “minimal downtime” requirement. The process involves an inventory stage to assess source servers, a transfer stage to move the data, and the final cutover stage to impersonate the source. This comprehensive, identity-preserving migration makes it the ideal choice.

Option a, Azure File Sync, is incorrect for this primary migration scenario. Azure File Sync is a service for centralizing file shares in Azure Files and providing a fast, local cache on-premises via a Windows Server “server endpoint.” While it can be a destination for a migration (i.e., you could migrate files to an Azure File Sync server endpoint), it is not the migration tool itself. It does not perform an inventory, transfer, and cutover from an existing Windows Server in the way SMS does. Its primary purpose is synchronization and tiering, not a one-time lift-and-shift migration with identity takeover.

Option b, Azure Migrate, is a broad migration hub and is also incorrect for this specific use case. Azure Migrate’s primary tool for this type of workload is the “Server Migration” tool, which is designed for migrating entire virtual machines (Hyper-V or VMware) or physical servers to become Azure IaaS VMs. It performs a “lift-and-shift” of the entire server, not just the file share role. While you could migrate the whole VM, the scenario implies a modernization to new Windows Server 2022 VMs. Storage Migration Service is the purpose-built tool for file server workload migration, which is more granular and appropriate than migrating the entire legacy OS. SMS is the specialized tool for this role, whereas Azure Migrate is for the entire server.

Option d, Azure Backup, is fundamentally incorrect as it is a disaster recovery and data protection service, not a migration tool. Azure Backup is used to back up on-premises servers, Azure VMs, and Azure Files to a Recovery Services vault. You would use Azure Backup to protect the new file server after the migration is complete. It has no features for inventorying file shares, transferring data in a migration-aware context, or performing a cutover and identity takeover of a source server. Using it for migration would be a complex, manual process of backing up and restoring, which would not migrate share permissions or the server identity.

Q22. You are designing a high-availability solution for a new file share that will be hosted on-premises using Storage Spaces Direct (S2D). The failover cluster will consist of four nodes. You need to select a cluster witness to maintain quorum. The on-premises data center has a reliable internet connection but no other independent infrastructure. Which witness type provides the highest resilience for this hybrid scenario and does not require additional on-premises hardware?

A) Disk Witness

B) File Share Witness

C) Cloud Witness

D) Node Majority

Answer: C

Explanation 

The correct answer is the Cloud Witness. A Cloud Witness is a quorum witness type that utilizes an Azure Storage Account to store a small blob file, which acts as the “vote” in the cluster quorum. This is the ideal solution for the described scenario for several reasons. First, the prompt specifies a “hybrid scenario” and a “reliable internet connection,” both of which are prerequisites for a Cloud Witness. Second, it provides a high degree of resilience by decoupling the witness from the local data center’s infrastructure. If the on-premises data center were to experience a complete power failure or network partition affecting all four nodes, a Disk or File Share Witness located within that same data center would also fail, leading to a loss of quorum. The Cloud Witness, residing independently in Azure, would remain available, allowing the cluster to maintain quorum correctly during split-brain scenarios. Finally, it meets the constraint of “not requiring additional on-premises hardware” because it only requires an Azure subscription and a storage account, rather than a dedicated shared disk or a separate server to host a file share.
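
As a rough sketch, assuming a general-purpose Azure Storage account already exists, the witness can be configured with a single cmdlet; the cluster name, storage account name, and key below are placeholder values.

```powershell
# Point the cluster quorum at a Cloud Witness in Azure.
# "Cluster01" and "witnessstore01" are placeholder names; supply your own
# cluster name, storage account name, and one of its access keys.
Set-ClusterQuorum -Cluster "Cluster01" `
                  -CloudWitness `
                  -AccountName "witnessstore01" `
                  -AccessKey "<storage-account-access-key>"

# Confirm the witness resource is configured and counted in the quorum.
Get-ClusterQuorum -Cluster "Cluster01"
```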

Option a, Disk Witness, is a suboptimal choice. A Disk Witness requires a dedicated shared disk (a small LUN) that is visible to all nodes in the cluster. This adds a dependency on a shared storage fabric (like iSCSI or Fibre Channel). While technically possible, it is an older model and, more importantly, it does not provide the same level of independent resilience as a Cloud Witness. If the storage array hosting the disk witness fails, or a site-wide disaster occurs, the witness is lost along with the cluster nodes. It also requires specific shared storage hardware, which the Cloud Witness avoids.

Option b, File Share Witness, is also not the best choice. This witness type requires a file share hosted on a separate server, typically a domain-joined Windows Server, that is not part of the cluster. This directly contradicts the requirement of “not requiring additional on-premises hardware,” as it necessitates provisioning and maintaining another server just to host the witness share. Furthermore, this witness is still subject to the same site-wide failures as the cluster itself. If it’s hosted in the same rack or data center, it offers no protection against a major outage.

Option d, Node Majority, is incorrect and would be automatically selected in this case if no witness were configured. A four-node cluster using Node Majority (where each node has a vote) has a total of four votes. This is an even number, which is highly discouraged for quorum configurations. In an even-vote cluster, a 50/50 split (two nodes failing) would cause the entire cluster to lose quorum and go offline. The purpose of a witness is to add a fifth vote, creating an odd number (five total votes) and allowing the cluster to sustain a failure of two nodes (since three votes would remain, constituting a majority). Therefore, “Node Majority” is not a witness type you select in this case; it’s the (flawed) configuration you get if you fail to add a witness.

Q23. Your company uses Microsoft Defender for Cloud to monitor the security posture of both Azure VMs and on-premises servers. An on-premises server, SRV-WEB-01, has been onboarded to Defender for Cloud via Azure Arc. A security administrator reports that Just-In-Time (JIT) VM access is not available for SRV-WEB-01, even though the enhanced security features (Microsoft Defender for Servers) are enabled. What is the most likely reason for this?

A) The server SRV-WEB-01 must be migrated to an Azure IaaS VM.

B) The administrator has not been assigned the Security Reader role for the server.

C) JIT VM access requires the server to be behind an Azure Firewall.

D) The on-premises server is not running the Azure Network Adapter.

Answer: A

Explanation 

The most likely reason is that SRV-WEB-01 is an on-premises server (managed by Azure Arc) and not an Azure IaaS VM. Just-In-Time (JIT) VM access is a feature of Microsoft Defender for Cloud that is specifically designed to lock down management ports (like RDP and SSH) on Azure IaaS virtual machines. It works by integrating directly with Azure’s Network Security Groups (NSGs). When a user requests JIT access, Defender for Cloud checks their Azure RBAC permissions and, if approved, creates a temporary “Allow” rule in the NSG for the user’s source IP address, valid for a limited time. This entire mechanism is contingent on the VM being a native Azure resource managed by the Azure Fabric and protected by an NSG. An on-premises server connected via Azure Arc, while visible in Defender for Cloud, does not have its network traffic managed by NSGs. Therefore, the JIT VM access feature is technically not applicable or available for Azure Arc-enabled servers.

Option b is incorrect. While permissions are necessary to configure or request JIT access (requiring a role like Contributor or a custom role with the correct permissions, not just Security Reader), the feature itself would still be visible or at least applicable to the resource if it were supported. The problem here is not a user’s permissions, but a fundamental incompatibility of the feature with the resource type (Azure Arc vs. Azure VM).

Option c is incorrect. JIT VM access does not have a hard dependency on Azure Firewall. It can be implemented with standard Network Security Groups (NSGs). While Azure Firewall can provide an additional, more advanced layer of protection (e.g., in a hub-spoke topology), it is not a prerequisite for enabling the JIT VM access feature on a supported Azure VM. The core requirement is an NSG.

Option d is incorrect. The “Azure Network Adapter” is a conceptual component related to how Azure IaaS VMs connect to the vNet, but it’s not a distinct, installable component on-premises. On-premises servers connect to Azure services via the public internet (or ExpressRoute/VPN) using their existing physical network adapters. The component that enables them to be managed by Azure is the Azure Arc Connected Machine agent, which does not provide the network-layer integration with NSGs that JIT access requires.

Q24. An administrator is configuring Azure Site Recovery (ASR) to replicate on-premises Hyper-V VMs to Azure. The administrator has already created a Recovery Services vault. Which of the following components must be deployed on-premises to discover, coordinate, and manage the replication and failover of the Hyper-V VMs?

A) An Azure Arc agent and a Log Analytics agent on each VM.

B) A Hyper-V replica broker and a Hyper-V recovery manager.

C) A Storage Migration Service orchestrator and a proxy.

D) The ASR provider on the Hyper-V hosts and the ASR agent on the VMs.

Answer: D

Explanation 

The correct answer is d. When setting up Azure Site Recovery (ASR) for on-premises Hyper-V VMs, a “push” replication model is used, which requires installing software components both on the Hyper-V hosts and, in some cases, on the guest virtual machines. Specifically, the Azure Site Recovery provider must be installed on each on-premises Hyper-V host (or cluster node) that hosts VMs you want to replicate. This provider registers the host with the Recovery Services vault and coordinates the replication process. Additionally, the Azure Site Recovery agent is installed on each virtual machine being replicated; it is responsible for capturing data changes within the VM and sending them to the replication appliance or directly to Azure, depending on the configuration. This two-part, agent-based architecture is fundamental to how ASR extends protection to on-premises Hyper-V environments.

Option a is incorrect. The Azure Arc agent is used for onboarding on-premises servers into Azure for management (like a control plane), and the Log Analytics agent is for collecting monitoring and log data for Azure Monitor. While these are common hybrid components, they are not part of the core ASR data-plane replication mechanism. ASR has its own dedicated agents (the provider and agent) for performing replication and failover.

Option b is incorrect. The “Hyper-V Replica Broker” is a role used in a Hyper-V Failover Cluster to provide a generic endpoint for replication between on-premises Hyper-V hosts (i.e., for Hyper-V Replica, not ASR). “Hyper-V Recovery Manager” is not a formal component name; the service is Azure Site Recovery. This option confuses the native Hyper-V Replica feature with the more advanced Azure Site Recovery service.

Option c is incorrect. The Storage Migration Service orchestrator and proxy are components used for migrating file servers, as detailed in a previous question. This service has no function related to the disaster recovery or replication of Hyper-V virtual machines. It is a tool for a completely different purpose (file server workload migration) and is not part of the ASR solution.

Q25. You are implementing a three-node failover cluster for a highly available file share. The cluster nodes are Node1, Node2, and Node3. You have successfully created the cluster. You need to ensure that if a user accesses the file share using the cluster’s network name, their connection is always directed to the node that currently owns the clustered file share role. Which cluster component must you configure?

A) A Cluster Shared Volume (CSV)

B) A floating IP address

C) A Scale-Out File Server (SOFS)

D) Cluster-Aware Updating (CAU)

Answer: B

Explanation 

The correct answer is a floating IP address. In a traditional failover cluster (as opposed to a Scale-Out File Server), a clustered role (like a “File Server for general use”) is active on only one node at a time. To ensure clients can always connect to this role, regardless of which node it is active on, the cluster uses a virtual IP address, often called a floating IP address. This IP address is a cluster resource that “floats” with the role. When the file share role is on Node1, Node1 owns and answers for that IP. If Node1 fails and the role is moved to Node2, the cluster automatically moves the IP address resource to Node2. Node2 then sends a gratuitous ARP to update the network, and all client traffic sent to that single, unchanging IP address is now routed to the newly active node. This provides the seamless connectivity abstraction that high availability requires. The cluster network name (the Client Access Point) is registered in DNS with this floating IP address.
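
Purely as an illustration, the role and its client access point are created roughly as follows; the role name, cluster disk, and IP address are placeholder values, not part of the scenario.

```powershell
# Create the "File Server for general use" role with a client access point.
# "FS01" is registered in DNS against the floating IP 10.0.0.50, which
# moves with the role on failover.
Add-ClusterFileServerRole -Name "FS01" `
                          -Storage "Cluster Disk 1" `
                          -StaticAddress 10.0.0.50

# Show which node currently owns the role (and therefore answers on the IP).
Get-ClusterGroup -Name "FS01"
```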

Option a, Cluster Shared Volume (CSV), is incorrect. A CSV is a storage technology that allows all nodes in the cluster to have simultaneous read/write access to the same shared disk (LUN). This is a critical prerequisite for many clustered workloads (especially Hyper-V), as it allows the storage to be accessible no matter which node owns the VM. However, it is a storage-layer component and does not, by itself, handle the client-facing network redirection.

Option c, Scale-Out File Server (SOFS), is incorrect in this context. A SOFS is a different type of clustered file server designed for active-active access, where all nodes simultaneously serve file share data. It is primarily used for application data (like Hyper-V VHDs or SQL Server databases). The scenario describes a general-purpose, highly available file share, which implies a traditional active-passive cluster with a “File Server for general use” role. This role uses a floating IP, whereas a SOFS uses different, more complex networking with SMB 3.0.

Option d, Cluster-Aware Updating (CAU), is incorrect. CAU is a management feature, not a networking component. It is a tool that automates the process of patching and rebooting cluster nodes one at a time, without taking the entire clustered workload offline. It gracefully drains roles from a node, patches it, reboots it, and then moves to the next node. It is essential for maintaining a cluster but is not the mechanism that directs client traffic.

Q26. A security administrator wants to prevent unsigned or unapproved executables from running on a set of on-premises Windows Server 2019 servers that handle sensitive data. The administrator wants a solution that can be managed centrally, ideally using a hybrid approach. The servers are already onboarded with Azure Arc. Which technology should be implemented to enforce this?

A) Microsoft Defender for Identity

B) Windows Defender Application Control (WDAC)

C) Windows Defender Credential Guard

D) Just-In-Time (JIT) VM Access

Answer: B

Explanation 

The correct technology for this requirement is Windows Defender Application Control (WDAC). WDAC is a robust security feature that provides strict application control, moving away from the traditional model of “allow all, block known bad” to a “block all, allow known good” model. It allows administrators to create policies that specify exactly which applications and drivers are trusted to run on a system. These policies can be based on publisher certificates, file hashes, folder paths, or other attributes. By creating and enforcing a WDAC policy, the administrator can ensure that only signed and approved executables (those matching the policy) are allowed to run, effectively blocking all unsigned or unapproved code. This directly addresses the prompt’s requirement. Furthermore, WDAC policies can be deployed and managed centrally via Group Policy, System Center Configuration Manager (SCCM), or modern tools like Microsoft Intune, fitting the “managed centrally” requirement.
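
A minimal sketch of building a publisher-based allow list with the built-in ConfigCI cmdlets follows; the paths are placeholders, and the recommended audit-mode testing phase is omitted for brevity.

```powershell
# Scan a reference server and generate an allow-list policy that trusts
# files by publisher signature (ConfigCI module, built into Windows Server).
# A full C:\ scan is slow; the paths here are placeholders.
New-CIPolicy -Level Publisher `
             -ScanPath "C:\" `
             -UserPEs `
             -FilePath "C:\Policies\AllowedApps.xml"

# Compile the XML policy into the binary form the OS actually enforces.
ConvertFrom-CIPolicy -XmlFilePath "C:\Policies\AllowedApps.xml" `
                     -BinaryFilePath "C:\Policies\SIPolicy.p7b"
```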

Option a, Microsoft Defender for Identity, is incorrect. Defender for Identity is a cloud-based security solution that focuses on protecting on-premises Active Directory. It monitors domain controller traffic, user behavior, and authentication requests to detect and investigate advanced threats, identity-based attacks (like Pass-the-Hash), and malicious insider activity. It does not control or block application execution on member servers.

Option c, Windows Defender Credential Guard, is incorrect. Credential Guard is a virtualization-based security (VBS) feature that isolates and protects user credentials (specifically, NTLM and Kerberos derived credentials) in a secure, isolated LSA process. This prevents credential theft attacks like Pass-the-Hash and Pass-the-Ticket. While it is a critical hardening feature for Windows Server, its function is to protect credentials, not to control application execution.

Option d, Just-In-Time (JIT) VM Access, is incorrect. As discussed in a previous question, JIT VM Access is a Microsoft Defender for Cloud feature for Azure IaaS VMs that locks down management ports (RDP/SSH) by default. It opens these ports on-demand for authorized users. Its purpose is to reduce the network attack surface of management ports, not to control which executables can run inside the operating system.

Q27. You are migrating an on-premises physical server running Windows Server 2012 to an Azure IaaS VM. You plan to use the Azure Migrate: Server Migration tool. What component must be deployed on-premises to facilitate a “push” migration of this physical server to Azure?

A) The Azure Migrate appliance

B) A replication appliance

C) The Storage Migration Service agent

D) The Azure Arc Connected Machine agent

Answer: B

Explanation 

The correct answer is the replication appliance. When migrating on-premises physical servers (or VMware VMs) to Azure using Azure Migrate: Server Migration, you use an “agent-based” migration method. This method requires two on-premises components. The first is the Azure Migrate appliance, which is used for discovery and assessment. However, for the actual replication of a physical server, a second, separate machine called the “replication appliance” is required. This appliance (which also hosts the process server) is responsible for receiving replication data from the physical server’s Mobility service agent and “pushing” it to Azure. It coordinates, compresses, encrypts, and sends the data to a cache storage account in Azure. The prompt specifically asks what facilitates the “push migration,” which is the core function of this replication appliance.

Option a, the Azure Migrate appliance, is partially correct but incomplete and thus the wrong answer. The Azure Migrate appliance is required, but its primary role in a physical server migration is discovery and assessment. It identifies the server and analyzes its suitability for Azure. The replication itself, the “push” of data, is handled by the dedicated replication appliance. In the context of migrating Hyper-V VMs or using the agentless VMware method, the term “Azure Migrate appliance” is used more broadly, but for agent-based physical server migration, the “replication appliance” is the distinct component that handles the data plane.

Option c, the Storage Migration Service agent, is incorrect. The Storage Migration Service (SMS) is a completely different technology used for migrating file shares from old servers to new servers. It has no capability to migrate an entire physical server’s operating system, applications, and data into an Azure IaaS VM.

Option d, the Azure Arc Connected Machine agent, is incorrect. The Azure Arc agent is a management tool. It is used to onboard an existing on-premises server (physical or virtual) into the Azure control plane (Azure Arc) so it can be managed by services like Azure Policy, Defender, and Monitor. It is not a migration tool and has no capability to replicate or “push” a server’s disks to Azure to create a new IaaS VM. You would install the Arc agent after a server is in its final state, not to perform the migration itself.

Q28. Your company has a hybrid deployment with a central set of file shares on-premises. To improve access for remote branches, you implement Azure File Sync. You create a Storage Sync Service, a sync group, and an Azure file share as the cloud endpoint. You then install the Azure File Sync agent on a Windows Server 2019 at a branch office and add it as a server endpoint. You discover that only a small subset of the files (e.g., 20GB of 500GB) has been downloaded to the branch server. Users report files are “missing” but then appear after a delay when accessed. What feature is active, and how is it configured?

A) Cloud Tiering is enabled with a high volume-free-space policy.

B) The initial sync is still in progress and has paused.

C) Storage Migration Service has only transferred a subset of the data.

D) A Network Security Group (NSG) is blocking the download of large files.

Answer: A

Explanation 

The behavior described is the hallmark of Cloud Tiering. Azure File Sync has two primary functions: to synchronize files between endpoints and (optionally) to tier files to save local disk space. When Cloud Tiering is enabled on a server endpoint, the Azure File Sync agent (specifically, the storagesync.sys file system filter driver) replaces the full file content on the local server with a “reparse point” or “pointer.” The full file remains in the Azure file share (the cloud endpoint). The local server only keeps a small subset of “hot” or recently accessed files cached locally. The “20GB of 500GB” being present locally is a strong indicator of this. The behavior where users see “missing” files that then “appear after a delay” is the “file recall” process: when a user accesses a tiered file (the reparse point), the agent seamlessly downloads the full file content from Azure. This recall process causes the perceived delay. This feature is typically configured with two policies: a volume-free-space policy (e.g., “keep 20% of the local volume free”) and an optional date policy (e.g., “tier files not accessed in 30 days”). A high volume-free-space policy (e.g., keep 80% free) would cause the agent to be very aggressive in tiering, leading to the behavior seen.
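
For illustration, a server endpoint with aggressive tiering might be created roughly like this with the Az.StorageSync module; all resource names and the 80% free-space value are placeholder assumptions.

```powershell
# Sketch: create a server endpoint with cloud tiering enabled.
# A VolumeFreeSpacePercent of 80 tells the agent to keep 80% of the volume
# free, tiering aggressively -- consistent with only ~20GB of 500GB local.
$registeredServer = Get-AzStorageSyncServer -ResourceGroupName "rg-files" `
    -StorageSyncServiceName "sync-svc"

New-AzStorageSyncServerEndpoint -ResourceGroupName "rg-files" `
    -StorageSyncServiceName "sync-svc" `
    -SyncGroupName "sg-corp" `
    -Name "branch-endpoint" `
    -ServerResourceId $registeredServer.ResourceId `
    -ServerLocalPath "D:\Shares" `
    -CloudTiering `
    -VolumeFreeSpacePercent 80
```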

Option b is plausible but less likely to be the correct answer. During an initial sync, the server endpoint downloads the namespace (the file and folder list) first, which is why users might see the file structure but not the content. However, the specific behavior of files downloading on access (the recall) is the defining characteristic of Cloud Tiering, not a standard sync process.

Option c is incorrect. Storage Migration Service is a migration tool for moving file server workloads. It is not a component of Azure File Sync, which is a synchronization service. The two are distinct hybrid technologies.

Option d is incorrect. A Network Security Group (NSG) is an Azure networking component that filters traffic to and from Azure resources (like VMs). It would not be a component on the on-premises branch office server. Even if an on-premises firewall were misconfigured, it would typically block traffic entirely (e.g., ports 445 or 443), preventing the sync from working at all, rather than selectively allowing namespace sync but blocking file content until a user-initiated recall.

Q29. You need to collect specific Windows Event Logs and performance counters from your on-premises Windows Servers and analyze them in Azure Monitor Logs. These on-premises servers are not, and will not be, onboarded with Azure Arc. What is the most direct method to accomplish this data collection?

A) Install the Azure Monitor Agent (AMA) on the servers and configure data collection rules (DCRs) in the Azure portal.

B) Install the legacy Log Analytics agent (Microsoft Monitoring Agent) on the servers and configure data collection from the Log Analytics workspace.

C) Install the Storage Migration Service and point the log output to a Log Analytics workspace.

D) Install the Azure Site Recovery Mobility Service, which automatically forwards logs to Azure Monitor.

Answer: B

Explanation 

The most direct method, given the constraints, is to install the legacy Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA). The prompt specifies that the servers are not and will not be onboarded with Azure Arc. The modern Azure Monitor Agent (AMA), mentioned in option A, requires the Azure Arc agent to be installed on non-Azure machines to manage them and link them to data collection rules (DCRs). Since Azure Arc is explicitly disallowed, the AMA cannot be used. The legacy Log Analytics agent (MMA), however, was designed to report directly to a Log Analytics workspace without any other dependencies. After installing the MMA on the on-premises servers, you can configure data collection (for Windows Event Logs, performance counters, etc.) directly within the settings of the Log Analytics workspace itself. While this agent is considered “legacy” and is on a path to deprecation, it remains the correct technical answer for collecting data from non-Arc-enabled servers.
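
A hedged sketch of the unattended MMA install follows; the MSI properties are the ones documented for the legacy agent, while the workspace ID and key are placeholders taken from the workspace’s Agents settings.

```powershell
# Unattended install of the legacy Log Analytics (MMA) agent, pointing it
# directly at a workspace -- no Azure Arc dependency. Run from the folder
# containing MOMAgent.msi.
msiexec.exe /i MOMAgent.msi /qn `
    ADD_OPINSIGHTS_WORKSPACE=1 `
    OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE=0 `
    OPINSIGHTS_WORKSPACE_ID="<workspace-id>" `
    OPINSIGHTS_WORKSPACE_KEY="<workspace-key>" `
    AcceptEndUserLicenseAgreement=1
```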

Option a is incorrect, as explained above. The Azure Monitor Agent (AMA) is the modern, preferred agent, but its use on on-premises servers is predicated on the installation of the Azure Arc agent. The prompt’s “will not be onboarded with Azure Arc” constraint makes this option technically infeasible.

Option c is incorrect. The Storage Migration Service (SMS) is a tool for migrating file servers. It has absolutely no function related to collecting or forwarding event logs or performance counters to Azure Monitor. It is an entirely unrelated service.

Option d is incorrect. The Azure Site Recovery (ASR) Mobility Service is the agent installed on servers to replicate their data to Azure for disaster recovery. While it is an agent, its sole purpose is data replication. It does not collect or forward Windows Event Logs or performance counters for analysis in Azure Monitor. That function belongs to the Log Analytics or Azure Monitor agents.

Q30. A new Windows Server 2022 Azure IaaS VM, VM1, was deployed from a standard marketplace image. A security policy requires that all data on the OS disk and any attached data disks be encrypted. You need to enable encryption using a key that is managed in an Azure Key Vault. What solution should you use?

A) Azure Disk Encryption (ADE) with a key encryption key (KEK).

B) Storage Service Encryption (SSE) with a platform-managed key (PMK).

C) Windows BitLocker, configured manually from within the VM’s operating system.

D) Microsoft Defender for Cloud with the “Encrypt disks” recommendation.

Answer: A

Explanation 

The correct solution is Azure Disk Encryption (ADE). ADE provides volume-level encryption for both OS and data disks of Azure IaaS VMs by leveraging the BitLocker feature (for Windows) or DM-Crypt (for Linux) within the virtual machine. This encrypts all data on the disks. The scenario adds a crucial requirement: using a key managed in an Azure Key Vault. ADE is designed for this exact purpose. It can be configured to wrap the BitLocker encryption keys using a Key Encryption Key (KEK) that is stored and managed in your Azure Key Vault. This gives you full control over the encryption keys, including their rotation and access policies, while the data itself is encrypted at the OS level. This combination (ADE + Key Vault + KEK) perfectly matches the requirements.
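
As a rough sketch using the Az module, enabling ADE with a KEK looks roughly like this; the vault, key, and resource names are placeholders, and the Key Vault is assumed to have been enabled for disk encryption.

```powershell
# Enable ADE on VM1, wrapping the BitLocker keys with a KEK from Key Vault.
$kv  = Get-AzKeyVault -VaultName "kv-encrypt" -ResourceGroupName "rg-sec"
$kek = Get-AzKeyVaultKey -VaultName "kv-encrypt" -Name "vm-kek"

Set-AzVMDiskEncryptionExtension -ResourceGroupName "rg-sec" `
    -VMName "VM1" `
    -DiskEncryptionKeyVaultUrl $kv.VaultUri `
    -DiskEncryptionKeyVaultId $kv.ResourceId `
    -KeyEncryptionKeyUrl $kek.Key.Kid `
    -KeyEncryptionKeyVaultId $kv.ResourceId `
    -VolumeType "All"
```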

Option b, Storage Service Encryption (SSE) with a platform-managed key (PMK), is incorrect because it doesn’t meet all requirements. SSE is enabled by default on all Azure-managed disks. It encrypts data “at rest” in the Azure storage clusters, meaning the data is encrypted when written to the physical disks in the data center. A PMK means Microsoft manages the encryption keys. While this provides a baseline of security, it does not encrypt the data within the VM’s OS (BitLocker) and it does not meet the requirement of using a customer-managed key in Azure Key Vault. SSE can be used with customer-managed keys (CMK), but ADE is the feature that integrates with BitLocker for full OS-level encryption.

Option c is incorrect. While you could manually RDP into the VM and turn on BitLocker, this is not the “Azure-native” or recommended way to manage encryption for IaaS VMs. This manual method does not integrate with the Azure platform, makes key management difficult (the keys are stored within the VM or in AD), and cannot be easily automated, scaled, or audited through the Azure portal or CLI. Azure Disk Encryption is the official, integrated solution that automates this process and provides secure key management in Key Vault.

Option d is incorrect. Microsoft Defender for Cloud can identify that disks are unencrypted and recommend that you encrypt them. It might even provide a “Quick Fix” button. However, Defender for Cloud is the monitoring and recommendation service. The underlying technology that it would trigger to fix the issue is Azure Disk Encryption (ADE). Therefore, ADE is the “solution” itself, while Defender for Cloud is the management and security posture tool that reports on it.

Q31. Your organization has a two-node on-premises failover cluster running Hyper-V. You want to implement a disaster recovery solution that replicates these VMs to Azure. The primary goal is to have the lowest possible Recovery Point Objective (RPO) and enable orchestrated failover using recovery plans. Which Azure service should you configure?

A) Azure File Sync

B) Azure Backup

C) Hyper-V Replica

D) Azure Site Recovery (ASR)

Answer: D

Explanation 

The service designed for this exact scenario is Azure Site Recovery (ASR). ASR is a comprehensive disaster recovery (DR) solution that coordinates the replication, failover, and failback of on-premises workloads to Azure. For Hyper-V environments, ASR provides continuous or near-continuous replication, which directly addresses the goal of achieving the “lowest possible Recovery Point Objective (RPO).” Furthermore, ASR’s “Recovery Plans” are a key feature that allows you to orchestrate the entire failover process. A recovery plan can group multiple VMs together (e.g., an application server and its database server), define their startup order, and even include custom scripts or manual actions, ensuring the entire application comes online gracefully in Azure. This combination of low-RPO replication and orchestrated failover is ASR’s core value proposition.

Option a, Azure File Sync, is incorrect. Azure File Sync is a service for synchronizing file share data, not for replicating entire Hyper-V virtual machines. It operates at the file level, not the block level of a VHD.

Option b, Azure Backup, is a data protection service, not a DR service. While you can use Azure Backup to back up on-premises Hyper-V VMs to a Recovery Services vault (the same vault ASR uses), its primary purpose is “point-in-time” recovery (e.g., restore a VM from 3 days ago). It does not provide the continuous replication needed for a low RPO, nor does it offer the complex orchestrated failover capabilities of ASR’s recovery plans. Backup is about data survival; ASR is about service continuity.

Option c, Hyper-V Replica, is a native feature of Hyper-V that provides asynchronous replication of a VM from one Hyper-V host to another (the “replica server”). While it can be used for DR, it is typically used for on-premises-to-on-premises replication. It does not natively replicate to Azure. It also lacks the centralized management, recovery plan orchestration, and non-disruptive DR testing capabilities that ASR provides. ASR is, in effect, the “enterprise-grade, hybrid-cloud” evolution of the basic Hyper-V Replica concept.

Q32. You are managing a large-scale Windows Server 2022 environment. You want to implement a solution that uses on-premises domain controllers to detect and investigate advanced identity-based threats, such as Pass-the-Hash and Golden Ticket attacks. The solution must integrate with a cloud-based service for analysis and reporting. What should you deploy?

A) Microsoft Defender for Cloud

B) Microsoft Sentinel

C) Microsoft Defender for Identity

D) Windows Defender Application Control (WDAC)

Answer: C

Explanation 

The correct solution is Microsoft Defender for Identity. This service is specifically designed to address the requirement of protecting an on-premises Active Directory environment from identity-based attacks. Defender for Identity works by installing a “sensor” on your on-premises domain controllers. This sensor monitors network traffic, authentication requests (Kerberos and NTLM), and Active Directory events in real-time, without being intrusive. It sends this data to the Defender for Identity cloud service for analysis. The cloud service uses machine learning and behavioral analytics to build a profile of normal user and device activity. It can then detect anomalies and known attack patterns, such as Pass-the-Hash, Pass-the-Ticket, Golden Ticket attacks, and reconnaissance, which are exactly what the prompt asks for. This combination of on-premises sensors and cloud-based analytics makes it the perfect tool for this scenario.

Option a, Microsoft Defender for Cloud, is incorrect in this context. Defender for Cloud is a broad cloud security posture management (CSPM) and workload protection (CWPP) solution. While its “Microsoft Defender for Servers” plan protects the servers themselves, and it can integrate alerts from other services, its primary focus is not the deep, real-time analysis of Active Directory authentication protocols. Defender for Identity is the specialized tool for that.

Option b, Microsoft Sentinel, is a cloud-native SIEM (Security Information and Event Management) and SOAR (Security Orchestration, Automation, and Response) solution. It is a consumer of security signals. You would absolutely forward the alerts from Microsoft Defender for Identity to Microsoft Sentinel for correlation with other logs (e.g., firewall, server logs) and for incident management. However, Sentinel is not the tool that generates the AD-specific threat detections. Defender for Identity is the producer; Sentinel is the aggregator and correlator.

Option d, Windows Defender Application Control (WDAC), is incorrect. WDAC is an application whitelisting technology. Its purpose is to control which executables and drivers are allowed to run on a server. It has no capabilities for monitoring network authentication protocols or detecting identity-based attacks.

Q33. A Windows Server 2022 VM is running in Azure. You need to enable diagnostic logging for the VM to troubleshoot boot failures. You want to view the serial log output and see screenshots of the VM’s state. Which Azure feature should you enable and use?

A) VM Insights in Azure Monitor

B) The Log Analytics agent

C) Boot diagnostics

D) Azure Network Watcher

Answer: C

Explanation 

The feature designed for this specific purpose is Boot diagnostics. Boot diagnostics is an Azure IaaS VM feature that collects and stores information about the VM’s boot process. It captures two critical pieces of information: the serial log output from the VM’s console and screenshots of the VM’s display. This is invaluable for troubleshooting “blue screen” (BSOD) errors, boot-time “Operating system not found” messages, or any other issue that prevents the OS from loading correctly. When you enable boot diagnostics, you specify a managed storage account where this information will be stored. You can then view the serial log and the latest screenshot directly from the VM’s blade in the Azure portal, allowing you to diagnose the failure without even being able to RDP to the machine.
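
For example, boot diagnostics can be enabled and its output retrieved roughly as follows (Az module; resource names and paths are placeholders, and omitting a storage account is assumed to select a platform-managed one in recent Az versions).

```powershell
# Enable boot diagnostics on an existing VM.
$vm = Get-AzVM -ResourceGroupName "rg-prod" -Name "VM1"
Set-AzVMBootDiagnostic -VM $vm -Enable
Update-AzVM -ResourceGroupName "rg-prod" -VM $vm

# Pull the serial log and screenshot locally for offline troubleshooting.
Get-AzVMBootDiagnosticsData -ResourceGroupName "rg-prod" -Name "VM1" `
    -Windows -LocalPath "C:\BootDiag"
```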

Option a, VM Insights in Azure Monitor, is incorrect. VM Insights is a powerful monitoring solution that provides in-depth performance and dependency mapping for a running operating system. It relies on the Azure Monitor Agent and Log Analytics to collect data from within the guest OS. If the VM is failing to boot, VM Insights will not be able to collect any data, as its agents will not be running.

Option b, the Log Analytics agent, is incorrect for the same reason. The agent (whether the legacy MMA or the modern AMA) is a service that runs inside the guest OS. It collects event logs and performance counters. If the OS cannot boot, the agent cannot run, and therefore cannot send any diagnostic data. Boot diagnostics, in contrast, operates at the Azure platform level, capturing the VM’s console output externally.

Option d, Azure Network Watcher, is incorrect. Network Watcher is a suite of tools for monitoring, diagnosing, and managing network-related issues in Azure. You would use it to troubleshoot NSG rules, VPN connectivity, or packet routing. It has no capability to view a VM’s console output or take screenshots to troubleshoot an OS boot failure.

Q34. Your company has a two-node on-premises failover cluster configured with Storage Spaces Direct (S2D). One of the nodes, S2D-Node1, experiences a hardware failure on its network adapter and is offline. A replacement node, S2D-Node-New, has been provisioned with Windows Server 2022 and has identical hardware. You have added the new node to the cluster. What is the next step to fully integrate S2D-Node-New into the S2D storage pool and retire S2D-Node1?

A) Run Repair-StoragePool -Name S2D* to rebuild the data on the new node.

B) Run Remove-ClusterNode -Name S2D-Node1 and then Add-PhysicalDisk -Node S2D-Node-New.

C) Run Remove-ClusterS2DDisk -Node S2D-Node1 and then Enable-ClusterS2D.

D) Run Set-PhysicalDisk -Node S2D-Node-New -Usage AutoSelect and then remove S2D-Node1.

Answer: B

Explanation 

The correct sequence of operations is represented by option B. When a node in a Storage Spaces Direct (S2D) cluster fails and needs to be replaced, you must formally evict the old node and then add the new node’s disks to the pool. The Remove-ClusterNode -Name S2D-Node1 command is the first critical step. This command properly evicts the failed S2D-Node1 from the cluster, which also triggers S2D to release the physical disks that were associated with that node. After the new node (S2D-Node-New) has been added to the cluster (a step the prompt states is complete), its local disks are not automatically part of the S2D storage pool. You must manually add them. The Add-PhysicalDisk cmdlet, or more commonly just allowing S2D’s auto-pooling to claim them (which is enabled by default on new S2D clusters), would be the next step. However, the most critical part of the process is the removal of the old node, which is what option b correctly identifies as the primary action before the new node’s resources can be fully integrated. Remove-ClusterNode is the correct PowerShell cmdlet for this.
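
A minimal sketch of that flow, under the assumptions that the new node was already joined with Add-ClusterNode and that auto-pooling is enabled (node and pool names are placeholders):

```powershell
# Evict the failed node; S2D retires its drives from the pool.
Remove-ClusterNode -Name "S2D-Node1"

# Confirm the new node's drives were claimed by auto-pooling --
# this should return nothing once they are in the pool.
Get-PhysicalDisk -CanPool $true

# Re-establish full resiliency on the virtual disks.
Get-StoragePool -FriendlyName "S2D*" | Get-VirtualDisk | Repair-VirtualDisk
```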

Option a is incorrect. Repair-StoragePool is a cmdlet used to repair a virtual disk (Storage Space) that is in a degraded state, for example, after a disk failure. It initiates the re-silvering process. However, it does not add a new node or its disks to the cluster. This command would be run after the new node’s disks are in the pool to re-establish full redundancy.

Option c is incorrect. Remove-ClusterS2DDisk is not a standard cmdlet. The process is not about removing S2D disks individually but about removing the node that owns them. Enable-ClusterS2D is the cmdlet used to initially create the S2D pool on a new cluster, not to add a node to an existing one.

Option d is incorrect. Set-PhysicalDisk is used to modify properties of a disk, but it’s not the primary command to add a new node’s disks to the S2D pool. The process is orchestrated at the cluster node level, not the individual disk level, when replacing a whole node. The eviction of the old node (Remove-ClusterNode) is the essential, missing first step in this option.

Q35. You are implementing a hybrid update solution using Windows Admin Center. You want to manage updates for your on-premises servers and your Azure IaaS VMs from a single interface. You have connected your Windows Admin Center gateway to Azure. Which Azure service must be integrated with Windows Admin Center to provide this unified update management capability?

A) Azure Site Recovery

B) Azure Arc

C) Azure Update Manager (v2)

D) System Center Configuration Manager (SCCM)

Answer: C

Explanation 

The correct answer is Azure Update Manager (v2). Windows Admin Center (WAC) provides a first-party integration with Azure for managing updates. While WAC has its own built-in “Updates” tool for managing individual servers, to get a unified, at-scale view and management capability across both on-premises and Azure IaaS VMs, you integrate it with Azure Update Manager. Azure Update Manager is the Azure-native, centralized service for assessing and deploying updates to all your machines. By integrating WAC with Azure, you can onboard your on-premises servers (via Azure Arc) into Azure Update Manager. Once onboarded, both your Azure IaaS VMs and your Arc-enabled on-premises servers are visible in the Azure Update Manager interface, allowing you to create schedules, deploy updates, and view compliance reports from a single pane of glass in the Azure portal. WAC facilitates the onboarding of the on-premises servers to this service.

Option a, Azure Site Recovery, is incorrect. ASR is a disaster recovery service for replicating VMs. It has no function related to Windows Update management.

Option b, Azure Arc, is a critical enabler but not the solution itself. You must use Azure Arc to onboard your on-premises servers so that Azure Update Manager can “see” and “manage” them. However, Arc is the control plane; Azure Update Manager is the specific service that performs the update management. The question asks for the service that provides the “unified update management capability,” which is Azure Update Manager.

Option d, System Center Configuration Manager (SCCM), is incorrect. SCCM (now part of Microsoft Intune) is a comprehensive on-premises management solution that also does update management (WSUS integration). While it can be co-managed with Azure, it is a separate, complex infrastructure. Windows Admin Center’s hybrid update integration is with the lightweight, Azure-native Azure Update Manager, not the full SCCM suite.

Q36. You need to migrate a file server’s data from an on-premises NetApp FAS array to a new Windows Server 2022 VM in Azure. The migration must include all files, folders, and NTFS permissions. You will be using the Storage Migration Service (SMS) orchestrated from Windows Admin Center. What must be installed on the SMS orchestrator server to allow it to migrate from a NetApp source?

A) The Azure File Sync agent

B) The NetApp migration proxy service

C) The Storage Migration Service NetApp provider

D) The Azure Arc agent

Answer: C

Explanation 

The correct answer is the Storage Migration Service NetApp provider. The Storage Migration Service (SMS) is designed to be extensible. By default, it can migrate from Windows Server and Samba/Linux sources. To migrate from third-party storage appliances, such as NetApp, you must install a specific “provider” that teaches SMS how to communicate with that source. NetApp has developed a provider that integrates with SMS in Windows Admin Center. This provider must be installed on the Storage Migration Service orchestrator server. Once installed, it allows the orchestrator to connect to the NetApp FAS array, inventory its shares, and transfer data and permissions (mapping NetApp permissions to Windows NTFS permissions) to the new destination server. This provider is the essential component that enables communication between SMS and the non-Windows NetApp source.

Option a, the Azure File Sync agent, is incorrect. Azure File Sync is a service for synchronizing data with an Azure file share. It is not part of the Storage Migration Service and is not required on the orchestrator to read from a NetApp device.

Option b is a plausible-sounding but incorrect term. The component is officially referred to as a “provider,” not a “migration proxy service.” The SMS orchestrator itself acts as the proxy for the migration, and the “provider” is the software plugin that enables it to talk to the NetApp source.

Option d, the Azure Arc agent, is incorrect. The Azure Arc agent is used to onboard servers into Azure’s management plane. It has no function related to the Storage Migration Service’s ability to communicate with a NetApp array. You might install the Arc agent on the destination server in Azure, but it is not required on the orchestrator for this specific purpose.

Q37. A new security policy mandates that all RDP access to a group of Windows Server 2022 IaaS VMs in Azure must be restricted. Access should only be allowed on-demand, for a limited time, from a user’s specific source IP, and must be logged and auditable. Which feature of Microsoft Defender for Cloud should be implemented?

A) Adaptive Application Controls

B) Just-In-Time (JIT) VM Access

C) Azure Disk Encryption (ADE)

D) Network Security Group (NSG) logging

Answer: B

Explanation 

The solution that perfectly matches all the requirements is Just-In-Time (JIT) VM Access. JIT is a feature within Microsoft Defender for Cloud (specifically, the Defender for Servers plan) that is designed to mitigate the risk of open management ports (like RDP 3389). When JIT is enabled on a VM, it configures the Network Security Group (NSG) to Deny all inbound traffic to those ports by default. When an authorized user needs to connect, they must go to the Azure portal (or use PowerShell/API) to request access. This request is logged. Defender for Cloud then checks the user’s Azure RBAC permissions. If authorized, JIT dynamically creates a temporary Allow rule in the NSG, specifying the user’s source IP address and a limited time window (e.g., 3 hours). This directly meets all the prompt’s requirements: on-demand, limited time, specific source IP, and logged/auditable.
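
As a sketch, a JIT policy matching these requirements could be defined with the Az.Security module; the subscription path, resource names, and region below are placeholders.

```powershell
# Define a JIT policy for RDP on VM1: requests may open TCP 3389 for at
# most 3 hours (PT3H) to a requester-specified source IP.
$vmId = "/subscriptions/<sub-id>/resourceGroups/rg-prod/providers/Microsoft.Compute/virtualMachines/VM1"
$policy = @(@{
    id    = $vmId
    ports = @(@{
        number                     = 3389
        protocol                   = "*"
        allowedSourceAddressPrefix = @("*")
        maxRequestAccessDuration   = "PT3H"
    })
})

Set-AzJitNetworkAccessPolicy -Kind "Basic" `
    -Location "eastus" `
    -ResourceGroupName "rg-prod" `
    -Name "default" `
    -VirtualMachine $policy

# Users then request access per session via the portal or
# Start-AzJitNetworkAccessPolicy, which writes the temporary NSG rule.
```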

Option a, Adaptive Application Controls, is incorrect. This is an application whitelisting feature. It analyzes a VM to create a baseline of known-safe applications and then creates rules (based on Windows Defender Application Control) to block any unapproved executables from running. It controls what runs inside the OS, not network access to the OS.

Option c, Azure Disk Encryption (ADE), is incorrect. ADE is a feature for encrypting the OS and data disks of a VM to protect data at rest. It has no control over network access or RDP connections.

Option d, Network Security Group (NSG) logging, is incorrect. NSG logging is a feature that records all traffic that is allowed or denied by an NSG. While this provides auditing (one of the requirements), it is not the solution itself. You could log all the failed RDP attempts on a permanently open port, but that doesn’t solve the security problem. JIT is the control mechanism that enforces the policy, and it also generates logs as part of its operation.

Q38. You are monitoring your hybrid environment using Azure Monitor. You have several on-premises servers connected via Azure Arc and several Azure IaaS VMs. You want to get a comprehensive view of the performance and health of all these machines, including a map of the network dependencies and running processes. Which specific Azure Monitor solution should you enable?

A) Boot Diagnostics

B) VM Insights

C) Microsoft Sentinel

D) Network Watcher

Answer: B

Explanation 

The solution that provides this exact set of capabilities is VM Insights. VM Insights is a feature within Azure Monitor that is designed to provide deep, “inside-the-box” monitoring for virtual machines. It has two main components. The first is the Health feature, which monitors the guest OS for performance issues (e.g., CPU, memory, disk) and health states. The second, and more distinguishing, feature is the Map tab. The Map feature, powered by the Dependency agent, discovers and visualizes all the running processes on the VM and the network dependencies (TCP connections) between that VM and other machines. This “map of network dependencies and running processes” is exactly what the prompt asks for. VM Insights can be deployed to both Azure IaaS VMs and on-premises servers (via Azure Arc), providing the unified view the user wants.
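
One hedged way to install the two agents VM Insights relies on is via VM extensions; the extension publisher/type values below follow common documentation, while the resource names, region, and version numbers are placeholder assumptions (Arc-enabled servers use the equivalent Arc extensions).

```powershell
# Install the Azure Monitor agent plus the Dependency agent (which powers
# the Map feature) on an Azure VM.
Set-AzVMExtension -ResourceGroupName "rg-prod" -VMName "VM1" -Location "eastus" `
    -Name "AzureMonitorWindowsAgent" `
    -Publisher "Microsoft.Azure.Monitor" `
    -ExtensionType "AzureMonitorWindowsAgent" `
    -TypeHandlerVersion "1.10" -EnableAutomaticUpgrade $true

Set-AzVMExtension -ResourceGroupName "rg-prod" -VMName "VM1" -Location "eastus" `
    -Name "DependencyAgentWindows" `
    -Publisher "Microsoft.Azure.Monitoring.DependencyAgent" `
    -ExtensionType "DependencyAgentWindows" `
    -TypeHandlerVersion "9.10" -EnableAutomaticUpgrade $true
```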

Option a, Boot Diagnostics, is incorrect. Boot diagnostics is a troubleshooting tool used to view serial console logs and screenshots for VMs that are failing to boot. It provides no performance monitoring or dependency mapping for a running OS.

Option c, Microsoft Sentinel, is incorrect. Sentinel is a SIEM/SOAR tool. It collects security logs and alerts to find and respond to threats. It does not provide performance health monitoring or process-level dependency mapping in the way VM Insights does. You would send alerts to Sentinel, but it’s not the primary performance monitoring tool.

Option d, Network Watcher, is incorrect. Network Watcher is a suite of tools for troubleshooting the Azure network (the vNets, NSGs, routes, etc.). It operates at the network fabric level. It does not have agents that run inside the guest OS to discover running processes or their dependencies. VM Insights (Map) provides the in-guest view, while Network Watcher provides the network infrastructure view.

Q39. You are implementing a three-node Storage Spaces Direct (S2D) cluster. A networking consultant has advised you to use Network ATC to simplify and enforce an intent-based networking configuration for the cluster. Which of the following is a primary benefit of using Network ATC?

A) It automatically migrates file shares from old servers to the S2D cluster.

B) It automatically configBures Hyper-V Replica for disaster recovery.

C) It automates the deployment and configuration of all cluster networking, including virtual switches and adapter settings.

D) It provides Just-In-Time (JIT) access to the cluster nodes.

Answer: C

Explanation 

The primary benefit of Network ATC is that it automates and standardizes the complex task of configuring networking for an Azure Stack HCI or S2D cluster. Setting up S2D networking (e.g., for storage, management, and VM traffic) is notoriously complex, requiring the creation of vSwitches (with Switch Embedded Teaming), vNICs, and the configuration of Data Center Bridging (DCB), QoS, and adapter properties. Network ATC simplifies this by allowing an administrator to declare an “intent” (e.g., “this cluster will have a ‘storage’ network and a ‘management’ network”). Network ATC then takes over and automatically deploys the vSwitches, configures the adapters (including RDMA for S2D), and ensures the configuration is identical and compliant across all nodes in the cluster. It “automates the deployment and configuration of all cluster networking,” which is precisely what option c states.
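
A minimal sketch of declaring intents; the intent names and adapter names are placeholders for this cluster’s physical NICs.

```powershell
# Declare intents once; Network ATC deploys the SET switch, vNICs, and
# DCB/RDMA settings identically on every node.
Add-NetIntent -Name "MgmtCompute" -Management -Compute `
              -AdapterName "pNIC1", "pNIC2"
Add-NetIntent -Name "Storage" -Storage `
              -AdapterName "pNIC3", "pNIC4"

# Watch provisioning status across the cluster.
Get-NetIntentStatus
```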

Option a is incorrect. The tool for migrating file shares is the Storage Migration Service (SMS), not Network ATC. Network ATC is purely a network configuration tool.

Option b is incorrect. Hyper-V Replica is a disaster recovery feature. While it uses the network, Network ATC does not configure it. Network ATC configures the underlying network plumbing for the cluster, not the specific applications or roles that run on it.

Option d is incorrect. Just-In-Time (JIT) access is a network security feature of Microsoft Defender for Cloud, used to lock down management ports. It is completely unrelated to the initial deployment and configuration of cluster networking that Network ATC performs.

Q40. You are using Azure File Sync to centralize your company’s file shares. An on-premises server endpoint is configured with Cloud Tiering. A user accidentally deletes a file from the on-premises server that had been tiered (only a reparse point existed locally). The file now appears to be gone from all server endpoints and the Azure file share. Where should you look to recover this deleted file?

A) The on-premises server’s Recycle Bin.

B) The Azure file share’s “soft delete” feature.

C) The Storage Migration Service’s version history.

D) The VSS shadow copies on the server endpoint.

Answer: B

Explanation 

The correct place to recover the file is from the Azure file share’s “soft delete” feature. When a file is deleted from any endpoint in a sync group (in this case, the on-premises server), Azure File Sync synchronizes that deletion to all other endpoints, including the cloud endpoint (the Azure file share). This is why the file “appears to be gone from everywhere.” However, this deletion is a destructive action. To protect against accidental deletions, the Azure file share itself has a “soft delete” feature. When enabled, a file that is “deleted” is not immediately, permanently purged. Instead, it is transitioned to a “soft-deleted” state and retained for a configurable period (e.g., 7 days). During this time, the file is not visible in the file share, but it can be “undeleted” from the Azure portal. This is the primary recovery mechanism for this specific scenario.
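
For illustration, soft delete with a 7-day retention can be enabled on the storage account’s file service roughly as follows (Az.Storage module; the account and resource group names are placeholders).

```powershell
# Turn on soft delete with a 7-day retention for the storage account's
# file service. Soft-deleted share contents can then be undeleted from
# the Azure portal within the retention window.
Update-AzStorageFileServiceProperty -ResourceGroupName "rg-files" `
    -StorageAccountName "companyfiles" `
    -EnableShareDeleteRetentionPolicy $true `
    -ShareRetentionDays 7
```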

Option a is incorrect. When a file is deleted (especially via a network share or if it’s a tiered file), it does not go to the server’s local Recycle Bin. The deletion is a file system operation that is processed by the sync agent, which bypasses the Recycle Bin.

Option c is incorrect. The Storage Migration Service is a migration tool and is not part of the Azure File Sync architecture or its recovery process.

Option d is incorrect. VSS (Volume Shadow Copy Service) shadow copies can be used with Azure File Sync, but there’s a nuance. If the file was tiered, the reparse point (the 0-byte pointer) might be in the VSS snapshot, but the data would not be. The primary and most reliable method, which works regardless of the file’s tiered status, is the soft delete feature on the Azure file share itself, which is the “source of truth” in the sync topology.

 
