Q61. You are designing a high-availability solution for a new, critical on-premises SQL Server workload. The solution will use a two-node Windows Server 2022 failover cluster. The nodes are located in the same datacenter. You need to configure a cluster witness to maintain quorum. You do not have any shared storage (iSCSI or Fibre Channel), and you do not want to provision another separate Windows Server just to host a file share. You do, however, have a stable internet connection and an Azure subscription. Which witness type is the most appropriate and resilient for this scenario?
A) Disk Witness
B) File Share Witness
C) Cloud Witness
D) Node Majority
Answer: C
Explanation
The correct answer is Cloud Witness. A Cloud Witness is a type of quorum witness for a failover cluster that leverages Microsoft Azure Blob Storage. It stores a small blob file in an Azure Storage Account, and this blob is used as a voting element in the cluster’s quorum calculations. This is the perfect solution for this scenario. The prompt explicitly rules out a Disk Witness (option a) by stating there is “no shared storage.” It also rules out a File Share Witness (option b) by stating a desire to “not provision another separate Windows Server.” The cluster has a “stable internet connection and an Azure subscription,” which are the only prerequisites for a Cloud Witness. It provides greater resilience than an on-premises witness because it is hosted in a separate failure domain (the Azure datacenter), protecting the cluster against a site-wide power or network failure in the on-premises datacenter. Option d, Node Majority, is not a witness type but a quorum configuration. It would be the default for a cluster with an odd number of nodes. For a two-node cluster, Node Majority is not recommended because the failure of a single node would leave only one vote remaining, which is not a majority, causing the entire cluster to go offline. A witness is required for a two-node cluster to be resilient to a single-node failure, as it provides the critical third vote.
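As a sketch of the configuration step, a Cloud Witness is set with a single PowerShell cmdlet run on any cluster node; the storage account name and access key below are placeholders, not real values:

```powershell
# Configure a Cloud Witness backed by an Azure Storage account.
# 'az801witness' and the access key are placeholder values for illustration.
Set-ClusterQuorum -CloudWitness `
    -AccountName 'az801witness' `
    -AccessKey '<storage-account-access-key>'

# Verify the witness is configured and has a quorum vote.
Get-ClusterQuorum
```

Only the storage account name and one of its access keys are needed; the cluster creates and maintains the small witness blob automatically.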
Q62. You are managing a four-node Storage Spaces Direct (S2D) cluster running on Windows Server 2019. You need to apply monthly security patches to all nodes in the cluster with the least possible disruption to the running virtual machine workloads. The solution must be automated and should drain roles, patch, reboot, and resume roles for one node at a time. Which feature should you configure and use?
A) Windows Server Update Services (WSUS)
B) Azure Update Manager
C) Cluster-Aware Updating (CAU)
D) Storage Migration Service (SMS)
Answer: C
Explanation
The correct answer is Cluster-Aware Updating (CAU). This feature is purpose-built for the exact scenario described. CAU is a feature integrated with Windows Server Failover Clustering that automates the process of applying software updates to all nodes in a cluster while maintaining service availability. When CAU performs an “Updating Run,” it selects one node at a time, places it into maintenance mode, and drains all running roles (like virtual machines) from it, live-migrating them to other nodes in the cluster. Once the node is empty, CAU applies the updates, reboots the node if required, brings it back online, and resumes its cluster membership. It then repeats this process for every other node in the cluster, one by one. This ensures that the clustered workloads remain online and highly available throughout the entire patching cycle. Option a, Windows Server Update Services (WSUS), is a repository and management tool for updates, but it is not “cluster-aware.” It cannot orchestrate the draining and live-migration of roles. You can, and often do, use WSUS as the source for the updates that CAU applies, but CAU is the automation engine that performs the action. Option b, Azure Update Manager, is a service for managing updates across Azure VMs and Azure Arc-enabled servers. While it can patch cluster nodes, it is not as deeply integrated with the cluster’s internal state (like draining roles) as the native CAU feature. CAU is the preferred and most robust method for a failover cluster. Option d, Storage Migration Service (SMS), is entirely incorrect. SMS is a tool for migrating file servers from old servers to new ones; it has no function related to patching or updating.
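A minimal sketch of the two CAU operations involved, assuming a placeholder cluster name: adding the CAU clustered role for self-updating runs, and triggering an on-demand updating run that drains, patches, reboots, and resumes one node at a time:

```powershell
# Add the CAU clustered role in self-updating mode (runs on a schedule).
# 'S2D-Cluster01' and the schedule are placeholder values.
Add-CauClusterRole -ClusterName 'S2D-Cluster01' -DaysOfWeek Tuesday -WeeksOfMonth 2 -Force

# Or kick off an immediate, one-time updating run across all nodes.
Invoke-CauRun -ClusterName 'S2D-Cluster01' -Force
```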
Q63. You need to migrate an aging Windows Server 2012 R2 file server to a new Windows Server 2022 virtual machine in Azure. The migration must include all files, folders, share permissions, and NTFS permissions. A key requirement is to minimize user disruption by performing an in-place cutover that transfers the original server’s name and IP address to the new Azure VM. Which tool is designed to manage this entire end-to-end process, including the cutover?
A) Azure File Sync
B) Azure Site Recovery (ASR)
C) Robocopy
D) Storage Migration Service (SMS)
Answer: D
Explanation
The correct answer is Storage Migration Service (SMS). SMS is a technology included in Windows Server and managed via Windows Admin Center. It is designed specifically for this use case: migrating file server workloads from older Windows Servers (or even non-Windows sources) to newer Windows Servers, including those running in Azure. The SMS process is a three-step workflow: 1) Inventory servers to find their file shares and configurations. 2) Transfer data, copying all files, folders, and, crucially, all share-level and NTFS-level permissions. 3) Cut over to the new server. This cutover stage is the key differentiator. During cutover, SMS dismounts the shares on the source server, performs a final synchronization, and then assumes the source server’s identity (its network name and IP addresses). This redirects all client connections to the new server seamlessly, without requiring any reconfiguration on the user’s end, fulfilling the requirement to “transfer the original server’s name and IP address.” Option a, Azure File Sync, is a synchronization service, not a migration tool. You might migrate to a server running Azure File Sync, but it doesn’t perform the migration and cutover. Option b, Azure Site Recovery (ASR), is a disaster recovery service for replicating entire VMs. It is not designed for a granular file server workload migration and does not handle the specific share-level migration and identity cutover in the same way. Option c, Robocopy, is a command-line file-copying tool. While it is excellent at copying files and permissions (with the right switches), it is just a tool. It is not an end-to-end service. It cannot perform the inventory, and it cannot, by itself, perform the automated network identity cutover.
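To make the contrast with option c concrete, Robocopy can indeed copy the data and NTFS security with the right switches, but that is all it does; paths below are placeholders:

```powershell
# Robocopy mirrors files plus NTFS security (/COPYALL includes the security descriptor),
# but it cannot inventory shares, migrate share-level settings, or assume the
# source server's name and IP the way Storage Migration Service's cutover does.
robocopy '\\OldFS\Share' '\\NewFS\Share' /MIR /COPYALL /R:2 /W:5 /LOG:C:\Logs\migration.log
```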
Q64. You have a Windows Server 2022 Azure IaaS VM. You are unable to connect to the VM using Remote Desktop Protocol (RDP). You suspect a boot-time error, such as a “blue screen” or a driver failure, is preventing the operating system from loading correctly. You need to view the VM’s console output and a screenshot of its current state as seen by the hypervisor. Which Azure feature should you use?
A) Boot diagnostics
B) Azure Network Watcher
C) VM insights (Azure Monitor)
D) The serial console (SAC)
Answer: A
Explanation
The correct answer is Boot diagnostics. This feature is specifically designed to troubleshoot VM boot failures. When enabled (which is the default for most marketplace images), Boot diagnostics captures two critical pieces of information from the underlying Azure hypervisor: 1) A screenshot of the VM’s console, which is invaluable for seeing errors like a “blue screen of death” (BSOD) or other OS-level startup messages. 2) The serial log, which captures text-based output from the VM’s serial port (COM1). This log often contains detailed driver and service loading messages that can help pinpoint the exact cause of a boot failure. You can access both the log and the screenshot from the VM’s blade in the Azure portal under the “Support + troubleshooting” section. Option b, Azure Network Watcher, is a suite of tools for troubleshooting network connectivity to a VM (like NSG rules or routing), but it cannot see inside the VM to diagnose an OS boot failure. Option c, VM insights, is a feature of Azure Monitor that uses an agent installed inside the OS to collect performance and dependency data. If the OS is not booting, the agent is not running, and VM insights will be unable to collect any data. Option d, The serial console (SAC), is a related but different feature. The serial console provides interactive command-line access to the VM’s serial port. While extremely useful, it is primarily for interacting with a partially-booted OS (e.g., via the Special Administration Console). “Boot diagnostics” is the feature you use to view the passive log and screenshot of a failed boot.
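If you prefer PowerShell over the portal, the Az module can retrieve the same artifacts; the resource group and VM names below are placeholders:

```powershell
# Download the Boot diagnostics console screenshot and serial log for a Windows VM.
# 'RG-Prod' and 'VM-SQL01' are placeholder names.
Get-AzVMBootDiagnosticsData -ResourceGroupName 'RG-Prod' -Name 'VM-SQL01' `
    -Windows -LocalPath 'C:\Temp\BootDiag'
```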
Q65. You are managing a hybrid environment with numerous on-premises Windows Servers onboarded to Azure Arc. You need to deploy a specific set of security configurations, such as enforcing BitLocker and auditing for specific registry keys, across all these Arc-enabled servers. You want to enforce this configuration and automatically remediate any servers that drift from the desired state. Which Azure service should you use?
A) Azure Automation
B) Azure Policy
C) Microsoft Defender for Cloud
D) Azure Update Manager
Answer: B
Explanation
The correct answer is Azure Policy. Azure Policy is the Azure-native service for governance at scale. It allows you to define and enforce policies that control or audit your Azure resources. Critically, through Azure Arc, this capability is extended to your on-premises servers. You can assign policy definitions (e.g., “Audit if BitLocker is not enabled”) to your Arc-enabled servers. For many policies, you can also create a remediation task that will automatically “fix” non-compliant resources, such as by triggering the installation of an extension or a script to enable BitLocker. This directly meets the requirement to “enforce this configuration and automatically remediate” non-compliant servers. Option a, Azure Automation, is a service for process automation using runbooks (PowerShell or Python). While you could write a runbook to check for BitLocker and enable it, it is not a “policy” or “governance” service. It does not provide the at-a-glance compliance dashboard or the declarative “desired state” model that Azure Policy does. Azure Policy can even trigger an Automation runbook as its remediation step, but Policy is the correct service for the enforcement and compliance auditing. Option c, Microsoft Defender for Cloud, is a security posture management tool. It uses Azure Policy heavily on the backend to surface security recommendations (e.g., “You have servers without BitLocker”). It is the dashboard that reports on the security-related policies, but Azure Policy is the underlying engine that defines and enforces them. Option d, Azure Update Manager, is a service for managing OS patches and updates, which is not related to configuration management like BitLocker or registry keys.
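A sketch of the assignment-plus-remediation flow in PowerShell; the definition GUID, scope, and names are placeholders, not real resource IDs:

```powershell
# Assign a built-in policy definition to a scope containing the Arc-enabled servers,
# with a managed identity so the remediation task can act on non-compliant machines.
# All IDs and names below are placeholders.
$def = Get-AzPolicyDefinition -Id '/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>'
New-AzPolicyAssignment -Name 'enforce-bitlocker' `
    -Scope '/subscriptions/<sub-id>/resourceGroups/RG-Arc' `
    -PolicyDefinition $def -IdentityType SystemAssigned -Location 'eastus'

# Create a remediation task to fix servers that were already non-compliant.
Start-AzPolicyRemediation -Name 'bitlocker-fix' -PolicyAssignmentId '<assignment-resource-id>'
```

New assignments only apply deployIfNotExists effects to resources going forward; the remediation task is what brings existing non-compliant servers into line.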
Q66. You are configuring Azure Site Recovery (ASR) to replicate your on-premises Hyper-V virtual machines to Azure for disaster recovery. You have already created a Recovery Services vault. What software component must be installed on your on-premises Hyper-V hosts to register them with the vault and manage the replication?
A) The Azure Arc agent
B) The Azure Site Recovery provider
C) The Log Analytics agent
D) A mobility service agent
Answer: B
Explanation
The correct answer is The Azure Site Recovery provider. When setting up ASR for Hyper-V environments, there are two key software components. The first is the Azure Site Recovery provider, which must be installed on the on-premises Hyper-V host server (or each node in a Hyper-V cluster). This provider is the “coordinator” on the on-premises side. It communicates with the Recovery Services vault in Azure, registers the Hyper-V host, discovers the VMs, and manages the entire replication process from the host. The second component for Hyper-V scenarios is the Recovery Services agent, installed alongside the provider on the host to transfer replication data; nothing needs to be installed inside the Hyper-V guests. The mobility service agent (option d) belongs to the VMware and physical-server scenarios, where it is installed inside the guest operating system of each replicated machine to capture data changes at the block level, so it is not the component installed on Hyper-V hosts. Option a, the Azure Arc agent, is for onboarding servers for Azure management and is not part of the ASR data replication process. Option c, the Log Analytics agent, is for sending monitoring data to Azure Monitor, also unrelated to ASR replication.
Q67. You are implementing a three-node failover cluster using Storage Spaces Direct (S2D) on Windows Server 2022. All nodes have identical hardware, including two 1 TB NVMe drives and four 4 TB SSDs. When you enable Storage Spaces Direct, how will these drives be claimed and used by the storage pool by default?
A) All drives (NVMe and SSD) will be used for capacity.
B) The NVMe drives will be used as a write-only cache, and the SSDs will be used for capacity.
C) The NVMe drives will be used as a read-write cache, and the SSDs will be used for capacity.
D) The NVMe drives will be used for capacity, and the SSDs will be used as a read-write cache.
Answer: C
Explanation
The correct answer is The NVMe drives will be used as a read-write cache, and the SSDs will be used for capacity. When Storage Spaces Direct (S2D) initializes, it automatically claims all local, non-boot drives from the cluster nodes and organizes them into a storage pool. It also automatically configures a cache based on the drive types it finds. S2D identifies the “fastest” media (in this case, NVMe) and automatically configures it as a cache for the “slower” media (in this case, the SSDs). By default, this cache is configured as a read-write cache (or “multi-tiered”) to accelerate both read and write operations for the capacity tier. S2D will not use the NVMe drives for capacity in this configuration, as it prioritizes them for caching. Therefore, the 1 TB NVMe drives will become the cache, and the 4 TB SSDs will form the capacity tier that stores the actual data. Option b is incorrect because the default cache is read-write, not write-only. Options a and d are incorrect because S2D will not mix cache-class media (NVMe) with capacity-class media (SSD) in the capacity tier, nor will it use the slower media (SSD) to cache for the faster media (NVMe).
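The behavior can be confirmed after enabling S2D; as a sketch, cache drives are claimed with a Usage of Journal while capacity drives show Auto-Select:

```powershell
# Enable S2D on the formed cluster; eligible local drives are claimed automatically
# and the fastest media tier is bound as cache.
Enable-ClusterStorageSpacesDirect -Confirm:$false

# Inspect how drives were claimed: the NVMe devices should report Usage = Journal
# (cache), while the SSDs report Usage = Auto-Select (capacity).
Get-PhysicalDisk | Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, BusType, Usage, Size
```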
Q68. You have an on-premises Windows Server 2012 R2 server that hosts a critical line-of-business application. You want to protect this server from a disaster by replicating it to Azure. You require a low Recovery Point Objective (RPO) and the ability to perform orchestrated, non-disruptive failover tests. The server must remain on-premises (i.e., it is not a migration). Which hybrid service is designed for this requirement?
A) Azure Backup
B) Storage Migration Service (SMS)
C) Azure Site Recovery (ASR)
D) Hyper-V Replica
Answer: C
Explanation
The correct answer is Azure Site Recovery (ASR). ASR is Microsoft’s native Disaster Recovery as a Service (DRaaS). It is explicitly designed to replicate on-premises workloads (both virtual and physical) to Azure to provide business continuity. ASR provides near-synchronous replication, which delivers a low Recovery Point Objective (RPO), meeting one of the key requirements. Its other main feature is the ability to create “Recovery Plans” to orchestrate failovers and, most importantly, to perform non-disruptive failover tests. This “test failover” feature spins up the replicated VMs in an isolated Azure network, allowing you to validate the application and recovery process without impacting the production on-premises server at all. Option a, Azure Backup, is a data protection service, not a DR service. It takes point-in-time backups, which typically results in a much higher RPO (e.g., hours) and cannot be used for an orchestrated, near-instant failover. Option b, Storage Migration Service (SMS), is a tool for migrating file servers, not for providing ongoing disaster recovery. Option d, Hyper-V Replica, is an on-premises-to-on-premises replication technology and does not natively replicate to Azure or offer the same level of orchestration as ASR.
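As a rough sketch of how a test failover is driven from PowerShell, assuming the protection container object has already been retrieved and all names and IDs are placeholders:

```powershell
# $container is assumed to come from Get-AzRecoveryServicesAsrProtectionContainer
# after setting the vault context; 'LOB-Server01' and the vNet ID are placeholders.
$item = Get-AzRecoveryServicesAsrReplicationProtectedItem `
    -ProtectionContainer $container -FriendlyName 'LOB-Server01'

# Run a non-disruptive test failover into an isolated virtual network.
# Production replication continues untouched while the test copy runs.
Start-AzRecoveryServicesAsrTestFailoverJob -ReplicationProtectedItem $item `
    -Direction PrimaryToRecovery -AzureVMNetworkId '<isolated-vnet-resource-id>'
```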
Q69. A security administrator has enabled Windows Defender Application Control (WDAC) on a set of on-premises Windows Servers to enforce a strict application-whitelisting policy. An administrator now finds they are unable to install a critical, legitimate driver update from a new, trusted publisher. The update is blocked. How must the administrator update the WDAC policy to allow this new driver?
A) Add the driver’s file hash to the WDAC policy.
B) Merge the new publisher’s certificate into the existing WDAC policy.
C) Disable WDAC, install the driver, and re-enable WDAC.
D) Add the driver file path to the AppLocker policy.
Answer: B
Explanation
The correct answer is to Merge the new publisher’s certificate into the existing WDAC policy. Windows Defender Application Control (WDAC) policies define what is trusted to run on a server. Policies are commonly built on “signer rules,” which trust any code signed by a specific publisher (e.g., “Microsoft Corporation”). When a new, trusted publisher’s driver needs to be allowed, the most secure and manageable way is to add that new publisher’s code-signing certificate to the policy. This is done by creating a new supplemental policy with the publisher rule, or by merging this new rule into the base policy using PowerShell cmdlets like Merge-CIPolicy. This allows all current and future drivers from this trusted publisher to run. Option a, adding the file hash, would work, but it is a brittle solution. It only trusts that one specific version of the driver file. When the next update for that driver comes out, its hash will be different, and the policy will have to be updated again. Signer rules are far more manageable. Option c is the “break-glass” approach and is highly discouraged. It temporarily opens the server to all risks and is not a manageable or auditable policy-based solution. Option d is incorrect because AppLocker is a separate, less secure technology. WDAC policies are distinct from AppLocker, and modifying AppLocker would have no effect on WDAC enforcement.
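The scan-and-merge flow mentioned above can be sketched as follows; the policy and driver paths are placeholders:

```powershell
# Scan the new driver package and generate Publisher-level signer rules from it.
# Paths are placeholder values.
New-CIPolicy -FilePath 'C:\Policies\NewPublisher.xml' -Level Publisher `
    -ScanPath 'C:\Drivers\NewVendor'

# Merge the new signer rules into the existing base policy, then convert and
# deploy the merged policy as usual.
Merge-CIPolicy -PolicyPaths 'C:\Policies\BasePolicy.xml','C:\Policies\NewPublisher.xml' `
    -OutputFilePath 'C:\Policies\MergedPolicy.xml'
```

Because the rule is at the Publisher level, future driver versions signed by the same certificate are allowed without further policy edits.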
Q70. You are monitoring your hybrid environment using Azure Monitor. You have several on-premises servers connected via Azure Arc and several Azure IaaS VMs. You want to get a comprehensive view of the performance and health of all these machines, including a map of the network dependencies and running processes. Which specific Azure Monitor solution should you enable?
A) Boot Diagnostics
B) VM Insights
C) Microsoft Sentinel
D) Network Watcher
Answer: B
Explanation
The correct answer is VM Insights. VM Insights is a solution within Azure Monitor specifically designed to provide deep, “inside-the-box” monitoring for virtual machines. It has two main components: 1) The Health feature, which monitors the guest OS for performance issues (e.g., CPU, memory, disk) and health states. 2) The Map feature, which is the key differentiator. The Map feature, powered by the Dependency agent, discovers and visualizes all running processes on the VM and, crucially, the network dependencies (TCP connections) between that VM and other machines. This “map of network dependencies and running processes” is exactly what the prompt asks for. VM Insights can be deployed to both Azure IaaS VMs and on-premises servers (via Azure Arc), providing the single, unified view the user wants. Option a, Boot Diagnostics, is a troubleshooting tool for VMs that are failing to boot; it provides no runtime performance data. Option c, Microsoft Sentinel, is a SIEM/SOAR tool for security event analysis, not performance monitoring. Option d, Network Watcher, monitors the Azure network fabric (vNets, NSGs, routing) but does not have insight into the processes inside the VM.
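As a rough sketch, enabling VM Insights on an Azure VM amounts to deploying two extensions: the Azure Monitor agent and the Dependency agent (which powers Map). Resource names, locations, and the version strings below are placeholders:

```powershell
# Deploy the Azure Monitor agent extension (placeholder names/versions).
Set-AzVMExtension -ResourceGroupName 'RG-Prod' -VMName 'VM-App01' -Location 'eastus' `
    -Name 'AzureMonitorWindowsAgent' -Publisher 'Microsoft.Azure.Monitor' `
    -ExtensionType 'AzureMonitorWindowsAgent' -TypeHandlerVersion '1.0'

# Deploy the Dependency agent extension, which collects process and TCP
# connection data for the Map feature.
Set-AzVMExtension -ResourceGroupName 'RG-Prod' -VMName 'VM-App01' -Location 'eastus' `
    -Name 'DependencyAgentWindows' -Publisher 'Microsoft.Azure.Monitoring.DependencyAgent' `
    -ExtensionType 'DependencyAgentWindows' -TypeHandlerVersion '9.10'
```

Arc-enabled servers use the equivalent Arc machine extensions, and a Data Collection Rule routes the collected data to the Log Analytics workspace.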
Q71. You are securing a Windows Server 2022 domain controller. A key security requirement is to protect against credential-theft attacks, such as Pass-the-Hash. You want to use a hardware-based solution that isolates the Local Security Authority Subsystem Service (LSASS) process in a virtualized, secure environment. The hardware supports virtualization (Intel VT-x/AMD-V) and a TPM 2.0. Which Windows security feature should you enable?
A) Windows Defender Application Control (WDAC)
B) Windows Defender Credential Guard
C) BitLocker Drive Encryption
D) Microsoft Defender for Identity
Answer: B
Explanation
The correct answer is Windows Defender Credential Guard. This feature is designed for the exact purpose of mitigating credential-theft attacks like Pass-the-Hash and Pass-the-Ticket. It uses virtualization-based security (VBS) to run the Local Security Authority Subsystem Service (LSASS) process in a protected, isolated container. This isolation prevents malware running in the standard operating system from “dumping” the memory of LSASS to steal the NTLM hashes and Kerberos tickets stored there. Even an attacker with administrator-level privileges on the host OS cannot access the secrets protected by Credential Guard. The prerequisites for this feature include the virtualization support and TPM mentioned in the prompt. Option a, WDAC, is an application-whitelisting feature; it controls what runs, but does not protect credentials in memory. Option c, BitLocker, protects data at rest on the disk; it does not protect credentials in memory. Option d, Microsoft Defender for Identity, is a service that detects these types of attacks by monitoring domain controller traffic, but it does not prevent the initial credential theft from memory on a compromised server. Credential Guard is the preventative control.
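Besides Group Policy, Credential Guard can be enabled directly through the documented registry values; a sketch, followed by the standard WMI check after reboot:

```powershell
# Enable virtualization-based security and Credential Guard (with UEFI lock).
# LsaCfgFlags: 1 = enabled with UEFI lock, 2 = enabled without lock.
$dg = 'HKLM:\SYSTEM\CurrentControlSet\Control\DeviceGuard'
New-ItemProperty -Path $dg -Name 'EnableVirtualizationBasedSecurity' -Value 1 -PropertyType DWord -Force
New-ItemProperty -Path $dg -Name 'RequirePlatformSecurityFeatures' -Value 1 -PropertyType DWord -Force
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name 'LsaCfgFlags' `
    -Value 1 -PropertyType DWord -Force

# After a reboot, a 1 in SecurityServicesRunning indicates Credential Guard is active.
(Get-CimInstance -ClassName Win32_DeviceGuard `
    -Namespace root\Microsoft\Windows\DeviceGuard).SecurityServicesRunning
```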
Q72. You have a Windows Server 2022 Azure IaaS VM with a data disk. You need to enable encryption for both the OS disk and the data disk. A strict corporate policy requires that you use your own encryption keys, which must be stored and managed in an Azure Key Vault. The solution must provide full-volume encryption inside the guest operating system. Which encryption solution should you use?
A) Storage Service Encryption (SSE) with Customer-Managed Keys (CMK)
B) Azure Disk Encryption (ADE) with a Key Encryption Key (KEK)
C) BitLocker configured manually from inside the VM
D) Storage Service Encryption (SSE) with Platform-Managed Keys (PMK)
Answer: B
Explanation
The correct answer is Azure Disk Encryption (ADE) with a Key Encryption Key (KEK). This question has several key requirements: 1) Encrypt both OS and data disks. 2) Use customer-managed keys. 3) Keys must be in Azure Key Vault. 4) Must be full-volume encryption inside the guest OS. Azure Disk Encryption (ADE) meets all these. ADE uses the BitLocker feature (on Windows) inside the guest OS to encrypt the volumes. To meet the key management requirement, you configure ADE to use a Key Encryption Key (KEK). In this model, the BitLocker keys are themselves encrypted (or “wrapped”) by a KEK that you own and store in your Azure Key Vault. This gives you full control over the master key, fulfilling the policy. Option a, SSE with CMK, is a different type of encryption. This encrypts the data “at rest” in the Azure storage cluster, outside the VM. It meets the “CMK” requirement but fails the “inside the guest OS” requirement. Option c is not a manageable or integrated Azure solution. Option d, SSE with PMK, is the default encryption for all Azure disks, but it uses Microsoft-managed keys and is not “inside the guest OS,” failing two requirements.
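A sketch of enabling ADE with a customer-held KEK; all vault, key, and VM names are placeholders:

```powershell
# Retrieve the Key Vault and the customer-managed key-encryption key (KEK).
# 'KV-Corp', 'ADE-KEK', and the resource names are placeholder values.
$kv  = Get-AzKeyVault -VaultName 'KV-Corp' -ResourceGroupName 'RG-Security'
$kek = Get-AzKeyVaultKey -VaultName 'KV-Corp' -Name 'ADE-KEK'

# Enable ADE on both the OS and data disks (-VolumeType All); BitLocker runs
# inside the guest, and its keys are wrapped by the KEK in the vault.
Set-AzVMDiskEncryptionExtension -ResourceGroupName 'RG-Prod' -VMName 'VM-App01' `
    -DiskEncryptionKeyVaultUrl $kv.VaultUri -DiskEncryptionKeyVaultId $kv.ResourceId `
    -KeyEncryptionKeyUrl $kek.Id -KeyEncryptionKeyVaultId $kv.ResourceId `
    -VolumeType All
```

The vault must have been created with disk-encryption access enabled (EnabledForDiskEncryption) for the operation to succeed.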
Q73. You are managing a large fleet of on-premises Windows Servers. You want to use a single, web-based interface to manage these servers, view their configurations, run PowerShell commands, and connect to them, replacing the need for many different MMC snap-ins. You also want this tool to be the gateway for integrating with Azure hybrid services like Azure Monitor and Azure Backup. What tool should you install on a gateway server in your on-premises network?
A) System Center Operations Manager (SCOM)
B) Windows Admin Center
C) Remote Server Administration Tools (RSAT)
D) Microsoft Sentinel
Answer: B
Explanation
The correct answer is Windows Admin Center. Windows Admin Center (WAC) is a modern, browser-based management tool for Windows Server. It is installed on a gateway server (or even a Windows 10/11 client) and provides a unified, web-based UI to manage all aspects of your on-premises servers—replacing tools like Server Manager, Event Viewer, Device Manager, and other MMC snap-ins. A primary design goal of WAC is to be the “on-ramp” to the cloud. It has deep integration with Azure, making it simple to onboard your on-premises servers to Azure Arc, Azure Monitor, Azure Backup, Azure Site Recovery, and other hybrid services. This perfectly matches the prompt’s description. Option a, SCOM, is a heavy, enterprise-scale monitoring solution, not a day-to-day server management tool, and it is not a lightweight, web-based UI. Option c, RSAT, is the collection of MMC snap-ins and PowerShell modules that WAC is designed to replace; it is not a unified web interface. Option d, Microsoft Sentinel, is a cloud-native SIEM for security analysis, not a server management tool.
Q74. You have an existing on-premises Windows Server 2016 failover cluster. You need to upgrade the cluster to Windows Server 2022 with no downtime for the Hyper-V workloads running on it. You have added two new Windows Server 2022 nodes to the cluster. The cluster is currently running in a mixed-mode state. What is the next critical step you must perform before you can evict the old Windows Server 2016 nodes?
A) Run the Update-ClusterFunctionalLevel cmdlet.
B) Live-migrate all virtual machines from the Windows Server 2016 nodes to the Windows Server 2022 nodes.
C) Install Cluster-Aware Updating (CAU) on the new nodes.
D) Configure a new Cloud Witness.
Answer: B
Explanation
The correct answer is to Live-migrate all virtual machines (or other cluster roles) from the old nodes to the new nodes. The Cluster Operating System (OS) Rolling Upgrade feature allows you to have a cluster run in a “mixed mode” with nodes of different OS versions (e.g., 2016 and 2022). However, this mode is meant to be temporary. Before you can decommission the old 2016 nodes, you must safely move all clustered roles (like VMs) off them and onto the new 2022 nodes. This is typically done using live migration to ensure no downtime. Once the 2016 nodes are “empty” (hosting no roles), you can then safely evict them from the cluster. Option a, running Update-ClusterFunctionalLevel, is the very last step. You can only run this cmdlet after all old nodes have been evicted from the cluster. Attempting to run it while 2016 nodes are still present will fail. Option c is a good practice but not the mandatory next step for the upgrade process itself. Option d is unrelated to the OS upgrade process; it’s a quorum configuration.
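The sequence can be sketched in PowerShell; the node name is a placeholder:

```powershell
# Drain a 2016 node: -Drain live-migrates its roles (VMs) to the other nodes.
Suspend-ClusterNode -Name 'Node-2016-A' -Drain -Wait

# Evict the node once it hosts no roles. Repeat for each remaining 2016 node.
Remove-ClusterNode -Name 'Node-2016-A' -Force

# Final, irreversible step: run only after ALL 2016 nodes have been evicted.
Update-ClusterFunctionalLevel -Force
```

Until Update-ClusterFunctionalLevel succeeds, the cluster stays in mixed mode and 2016 nodes could still be re-added; afterward, only Windows Server 2022 nodes can join.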
Q75. Your on-premises data center hosts a file server running on a physical Windows Server 2012. You plan to migrate this server to a new Windows Server 2022 IaaS VM in Azure using the Azure Migrate: Server Migration tool. This is a physical server, so you must use the agent-based migration method. What component must be installed on the on-premises Windows Server 2012 machine to capture and send data changes to Azure?
A) The Mobility service
B) The Azure Site Recovery provider
C) The Azure Arc agent
D) The Storage Migration Service agent
Answer: A
Explanation
The correct answer is The Mobility service. When you use the agent-based migration method in Azure Migrate (which is required for physical servers), two main components are involved. First is the “replication appliance,” an on-premises VM that coordinates and compresses the replication traffic. Second is the Mobility service (also called the Mobility agent), which must be installed directly on the source machine you want to migrate (in this case, the physical Windows Server 2012). This agent is responsible for capturing all data writes at the block level in real-time and sending this data to the replication appliance, which then forwards it to Azure. This is what enables the replication of the entire server’s state. Option b is incorrect; the ASR provider is used when replicating Hyper-V or VMware hosts, not for agent-based physical server migration. Option c, the Azure Arc agent, is for managing a server, not migrating it. Option d, the Storage Migration Service agent, is for file share migrations, not for a full-server (OS, apps, and data) “lift-and-shift” migration.
Q76. A security administrator is reviewing the security posture for all Windows Servers in Microsoft Defender for Cloud. A high-priority recommendation states “Vulnerabilities in security configuration on your machines should be remediated.” The administrator needs to understand which specific operating system baselines, registry settings, or certificate configurations are non-compliant. What component of Defender for Cloud provides this detailed vulnerability assessment?
A) Just-In-Time (JIT) VM Access
B) Adaptive Application Controls
C) Microsoft Defender for Identity
D) Microsoft Defender for Endpoint (integrated with Defender for Cloud)
Answer: D
Explanation
The correct answer is Microsoft Defender for Endpoint (integrated with Defender for Cloud). Microsoft Defender for Cloud’s “Defender for Servers” plan integrates seamlessly with Microsoft Defender for Endpoint (MDE). A key feature of MDE is Threat and Vulnerability Management (TVM). This TVM component performs deep scans of the onboarded servers (both Azure VMs and Arc-enabled servers) and compares their configurations against security baselines. It is this MDE-powered feature that identifies specific vulnerabilities, such as misconfigured OS settings, missing security patches, or insecure registry keys, and reports them back to Defender for Cloud as the “Vulnerabilities in security configuration” recommendation. The other options are incorrect. Option a, JIT VM Access, controls network access to management ports. Option b, Adaptive Application Controls, is an application-whitelisting feature. Option c, Microsoft Defender for Identity, monitors Active Directory authentication traffic. None of these are responsible for performing vulnerability assessments of the OS configuration.
Q77. You are managing a hybrid file-serving solution using Azure File Sync. Your primary cloud endpoint is an Azure file share containing 5 TB of data. You have a new branch office with a 1 TB server that you want to configure as a server endpoint. You want users at the branch office to be able to access all 5 TB of data, but you want to ensure that only 100 GB of the most-accessed files are cached locally, with the rest of the data “tiered” to the cloud. What Azure File Sync feature must you enable and configure on the new server endpoint?
A) Cloud Tiering
B) Azure File Share snapshots
C) Storage Migration Service
D) Rapid Namespace Synchronization
Answer: A
Explanation
The correct answer is Cloud Tiering. This is the core feature of Azure File Sync that solves this exact problem. When Cloud Tiering is enabled on a server endpoint, the Azure File Sync agent (specifically, the StorageSync.sys file system filter driver) replaces the full file content on the local server with a “reparse point” or “pointer.” The full file remains in the Azure file share (the cloud endpoint). This allows a small-capacity server (1 TB) to provide access to a massive-capacity dataset (5 TB). You would configure the “Volume Free Space” policy to a high percentage (e.g., keep 900 GB free) or use the “Date Policy” to tier files not accessed recently. This ensures only the “hot” or recently accessed files are kept in the local cache, meeting the 100 GB requirement. When a user accesses a tiered file (the reparse point), the agent seamlessly “recalls” (downloads) the file from Azure. Option b is a backup feature. Option c is a migration tool. Option d describes a sync mechanism, not a tiering policy.
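A sketch of creating the server endpoint with tiering enabled; a 90% volume-free-space policy on a 1 TB volume keeps roughly 100 GB of hot data local. All resource names and IDs are placeholders:

```powershell
# Register the branch server as a server endpoint with cloud tiering enabled.
# Names, paths, and the server resource ID below are placeholder values.
New-AzStorageSyncServerEndpoint -Name 'Branch01-Files' `
    -ResourceGroupName 'RG-FileSync' -StorageSyncServiceName 'SSS-Corp' `
    -SyncGroupName 'CorpFiles' -ServerResourceId '<registered-server-id>' `
    -ServerLocalPath 'D:\Shares\CorpFiles' `
    -CloudTiering -VolumeFreeSpacePercent 90
```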
Q78. Your organization uses Microsoft Sentinel as its central SIEM. You have 50 on-premises Windows Servers that host sensitive data. You need to collect security-related Windows Event Logs (such as logon failures, process creation, and account changes) from these servers and forward them to the Microsoft Sentinel workspace for analysis and threat hunting. These servers are not yet onboarded to Azure. What is the most appropriate way to ingest these specific logs into Sentinel?
A) Install the Azure Monitor Agent (AMA) on each server and create a Data Collection Rule (DCR).
B) Install the Log Analytics Agent (legacy) on each server and configure it in the Log Analytics workspace.
C) Install the Azure Arc agent on each server, then deploy the Azure Monitor Agent via an extension.
D) Configure Windows Event Forwarding (WEF) to a central collector, then install the Log Analytics Agent on the collector.
Answer: D
Explanation
The correct answer is Windows Event Forwarding (WEF) to a central collector, then installing the Log Analytics Agent on that collector. While options A, B, and C are all technically possible, this option is often considered the most efficient and scalable. Instead of installing and managing an agent on all 50 servers (which increases overhead and network traffic), you use a native Windows feature, Windows Event Forwarding (WEF). You configure the 50 “source” servers to forward their logs to a single “collector” server. You then only need to install one agent—the Log Analytics agent (or AMA via Arc on the collector)—on that one collector server. This agent ingests all the aggregated logs and sends them to Sentinel (which uses a Log Analytics workspace). This simplifies management, reduces the attack surface, and is a best practice for collecting Windows events at scale. Option b (legacy agent on all 50) works but is less efficient. Option c (Arc + AMA) is the modern approach if you are willing to onboard to Arc, but it still means 50 agents. Option d provides the best balance of native Windows features and efficient log ingestion.
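The WEF plumbing described above can be sketched with built-in Windows tools. This is a minimal outline; the subscription XML file name is hypothetical, and in practice the source-side settings are usually pushed via Group Policy rather than run by hand:

```powershell
# On each of the 50 source servers (typically deployed via Group Policy):
winrm quickconfig -quiet          # enable WinRM, the transport WEF uses

# On the collector server:
wecutil qc /quiet                 # enable the Windows Event Collector service

# Create the subscription from an XML definition (hypothetical file name)
# that selects Security events such as 4625 (logon failure) and 4688
# (process creation) from the source servers.
wecutil cs .\SentinelSubscription.xml

# Check the subscription's runtime status to see which sources are reporting:
wecutil gr SentinelSubscription
```

With the forwarded events landing in the collector's Forwarded Events log, a single Log Analytics agent (or AMA) on the collector ships everything to the Sentinel workspace.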
Q79. You are implementing a new three-node, disaggregated Storage Spaces Direct (S2D) cluster. The compute nodes are separate from the storage nodes. You need to ensure that the network used for the S2D storage traffic (SMB 3) is prioritized and has guaranteed bandwidth, even when other traffic, like Live Migration, is also using the same physical network adapters. Which networking technology should you configure on the hosts and vSwitches?
A) Data Center Bridging (DCB)
B) Network Address Translation (NAT)
C) Virtual Private Network (VPN)
D) Hyper-V Network Virtualization (HNV)
Answer: A
Explanation
The correct answer is Data Center Bridging (DCB). DCB is a suite of IEEE standards that enables Converged Ethernet, allowing different types of traffic (like storage, management, and VM traffic) to coexist on the same 10+ GbE network fabric. Its key feature is the ability to provide Quality of Service (QoS) by allocating a minimum guaranteed bandwidth percentage to specific traffic classes. For an S2D cluster, you would configure DCB to create a traffic class for “Storage” (SMB 3) and assign it a guaranteed bandwidth (e.g., 50%). This ensures that even if a massive Live Migration (another traffic class) starts, the critical, latency-sensitive S2D storage traffic will never be “starved” and will always have its 50% bandwidth available. This prioritization is essential for S2D performance and stability. The other options are incorrect. NAT and VPN are standard networking technologies for IP address translation and secure tunneling, respectively. HNV is a technology for network virtualization and isolation. None of them are used for bandwidth prioritization and traffic class management on a converged network fabric.
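A minimal host-side DCB configuration along these lines might look as follows. The adapter names and the 50% figure are illustrative, and the matching priority and ETS settings must also be configured on the physical switches:

```powershell
# Tag SMB Direct (port 445) traffic with IEEE 802.1p priority 3.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control (lossless behavior) for priority 3 only.
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Guarantee 50% of bandwidth to the SMB traffic class via ETS.
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply the DCB settings to the converged physical adapters (hypothetical names).
Enable-NetAdapterQos -Name "NIC1","NIC2"
```

The `BandwidthPercentage` value is the minimum guarantee discussed above: SMB storage traffic can use more when the link is idle, but it is never squeezed below 50% during a Live Migration.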
Q80. You have an on-premises two-node Hyper-V cluster replicating its virtual machines to a second, identical cluster at a DR site using Hyper-V Replica. You are performing a planned failover to the DR site for maintenance. You run the Start-VMInitialReplication cmdlet on the primary site, but it fails. You have already successfully run a planned failover. What is the most likely reason this cmdlet is failing?
A) The VM is already running at the DR site.
B) You must use the Start-VMFailover cmdlet on the DR site first.
C) The cmdlet Start-VMInitialReplication is used to begin a new replication, not to reverse a completed one.
D) The firewall is blocking replication traffic from the primary site.
Answer: C
Explanation
The correct answer is that Start-VMInitialReplication is the wrong cmdlet for this situation. The Start-VMInitialReplication cmdlet is used only when you are setting up replication for a virtual machine for the very first time. It initiates the “initial replication” (IR) of the VM’s VHDs to the replica server. In this scenario, the administrator has already performed a planned failover. This means the VM at the DR site is now the “primary” and is running. To re-establish protection, the administrator needs to reverse the replication direction so that the (now primary) DR-site VM replicates back to the (now offline) original-site VM. The correct cmdlet for this, after a planned failover, is Set-VMReplication with the -Reverse parameter, run on the replica (DR) server. Using Start-VMInitialReplication is incorrect because the replication relationship already exists; it just needs to be reversed. Option a is true, but it’s the result of the failover, not the reason this specific cmdlet fails. Option b is incorrect; Start-VMFailover is what the admin already did to fail over. Option d is possible but less likely, as the failover itself was successful, implying the network path was at least partially functional.
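The full planned-failover-and-reverse sequence can be sketched with the Hyper-V module. The VM and server names here are hypothetical:

```powershell
# 1. On the primary server: prepare the planned failover (sends the final delta).
Start-VMFailover -VMName "SQL01" -Prepare -ComputerName "HV-Primary"

# 2. On the replica (DR) server: complete the failover.
Start-VMFailover -VMName "SQL01" -ComputerName "HV-DR"

# 3. On the replica server: reverse the replication direction so the
#    DR-site VM now replicates back to the original primary.
Set-VMReplication -VMName "SQL01" -Reverse -ComputerName "HV-DR"

# 4. Start the VM at the DR site.
Start-VM -Name "SQL01" -ComputerName "HV-DR"
```

Note that Start-VMInitialReplication appears nowhere in this sequence; it belongs only to the initial Enable-VMReplication setup, which is exactly why it fails here.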