Q81. You are managing a hybrid environment with on-premises Windows Servers connected via Azure Arc. A new security policy requires that only a specific list of approved applications is allowed to run on these servers. All other applications, including any future unknown malware, must be blocked by default. Which technology should you implement and manage?
A) Just-In-Time (JIT) VM Access
B) Windows Defender Credential Guard
C) Windows Defender Application Control (WDAC)
D) Microsoft Defender for Identity
Answer: C
Explanation
The correct technology is Windows Defender Application Control (WDAC). WDAC is a strict, “allow-list” (or “whitelisting”) security feature. It moves the server from a default “allow-all” model to a “deny-all” model, where only applications, drivers, and scripts that are explicitly trusted in the WDAC policy are allowed to execute. This directly meets the requirement to “block all other applications” and “unknown malware by default.” WDAC policies can be deployed via GPO, Intune, or other tools and can be managed for Azure Arc-enabled servers.
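A minimal sketch of building such an allow-list policy with the ConfigCI cmdlets (the paths, scan location, and rule level here are illustrative assumptions, not from the question):
# Scan a reference server and build an allow-list policy at the Publisher level
New-CIPolicy -Level Publisher -ScanPath 'C:\' -UserPEs -Fallback Hash -FilePath 'C:\WDAC\AllowedApps.xml'
# Remove option 3 (Enabled:Audit Mode) so the policy is enforced rather than audit-only
Set-RuleOption -FilePath 'C:\WDAC\AllowedApps.xml' -Option 3 -Delete
# Convert the XML policy into the binary format that Windows consumes
ConvertFrom-CIPolicy -XmlFilePath 'C:\WDAC\AllowedApps.xml' -BinaryFilePath 'C:\WDAC\SIPolicy.p7b'
The resulting binary can then be distributed to the Arc-enabled servers through GPO, Intune, or another deployment tool, as the explanation notes.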
Option a, Just-In-Time (JIT) VM Access, is incorrect. JIT is a feature of Microsoft Defender for Cloud that locks down network management ports (like RDP/SSH) and only opens them on demand. It controls network access to the server, not application execution on the server.
Option b, Windows Defender Credential Guard, is incorrect. Credential Guard uses virtualization-based security (VBS) to isolate and protect credentials (like NTLM hashes and Kerberos tickets) in memory. It prevents credential theft attacks like Pass-the-Hash, but it does not control which applications can run.
Option d, Microsoft Defender for Identity, is incorrect. This is a cloud-based service that monitors on-premises Active Directory domain controller traffic to detect and investigate identity-based threats and attacks. It is a detection tool, not an application execution prevention tool.
Q82. You need to migrate an on-premises physical server running Windows Server 2012 to an Azure IaaS VM. You plan to use the Azure Migrate: Server Migration tool’s agent-based method. Which two components must be deployed in your on-premises environment to facilitate the discovery, assessment, and replication?
A) The Azure Migrate appliance and the Mobility service agent
B) The replication appliance and the Mobility service agent
C) The Azure Arc agent and the Azure Monitor agent
D) The Storage Migration Service orchestrator and the Mobility service agent
Answer: B
Explanation
For an agent-based migration of a physical server using Azure Migrate, two key on-premises components are required. First is the replication appliance (also known as the process server), which is a dedicated on-premises VM that coordinates the migration. It compresses, encrypts, and sends the replication data to Azure. Second is the Mobility service agent, which must be installed directly on the source physical server (the Windows Server 2012 machine). This agent captures all data writes at the block level and sends them to the replication appliance.
Option a is incorrect because the “Azure Migrate appliance” is used for the discovery and assessment of VMware and Hyper-V environments (often agentlessly). For physical servers, the dedicated “replication appliance” handles the migration/replication part.
Option c is incorrect. The Azure Arc agent is for managing an on-premises server from Azure, and the Azure Monitor agent is for monitoring. Neither is part of the Azure Migrate replication data path.
Option d is incorrect. The Storage Migration Service is a completely different tool used for migrating file shares, not for lifting and shifting an entire physical server OS and its applications to an Azure VM.
Q83. A user reports that they accidentally deleted a critical directory from an on-premises server that is acting as a server endpoint for Azure File Sync. This server has Cloud Tiering enabled. The deletion has synced to the cloud endpoint, and the directory is now gone from the Azure file share. You have confirmed that soft delete is enabled on the Azure file share. Where must you go to recover the deleted directory?
A) The Recycle Bin on the on-premises server
B) The Azure File Sync “Deleted Items” log in the Storage Sync Service
C) The “Soft deleted shares” section of the Storage Account
D) The “Show soft deleted items” option within the File Share’s “Browse” view in the Azure portal
Answer: D
Explanation
The correct recovery method is to use the soft delete feature on the Azure file share itself. When soft delete is enabled, deleted files or directories are not permanently purged. Instead, they are moved to a hidden, soft-deleted state. To recover them, you must navigate to the specific file share within the storage account in the Azure portal, select the “Browse” tab, and then check the “Show soft deleted items” box. This will make the deleted directory visible (usually grayed out), allowing you to right-click and “Undelete” it. Once undeleted in the cloud endpoint, Azure File Sync will sync this “new” item back down to all server endpoints.
Option a is incorrect. Deletions on a server endpoint (especially over a network share) typically bypass the local Recycle Bin.
Option b is incorrect. While the Storage Sync Service logs sync activity, it does not have a “Deleted Items” recovery feature. The recovery is done at the storage account level.
Option c is incorrect. “Soft deleted shares” is for recovering an entire file share that was deleted. This is different from recovering files or directories within a share, which is what “soft deleted items” (option d) is for.
Q84. You are managing a four-node Storage Spaces Direct (S2D) cluster. You need to perform maintenance on one of the nodes, S2D-Node-03, which requires a reboot. You want to gracefully move all virtual machines off this node and pause all cluster activity for the node without evicting it. Which PowerShell cmdlet should you run first?
A) Stop-ClusterNode -Name S2D-Node-03
B) Suspend-ClusterNode -Name S2D-Node-03
C) Remove-ClusterNode -Name S2D-Node-03
D) Set-ClusterNode -Name S2D-Node-03 -Status Draining
Answer: B
Explanation
The correct cmdlet for this scenario is Suspend-ClusterNode. This command is specifically designed for temporary maintenance. When you run Suspend-ClusterNode, the cluster service does two things:
It automatically initiates a drain of all active cluster roles (like virtual machines) from that node, live-migrating them to other nodes in the cluster.
Once the node is empty, it places the node into a paused state. In this state, the node is still a member of the cluster, but it cannot host any roles and will not participate in quorum voting.
This is the standard, graceful procedure for patching or rebooting a cluster node. After maintenance is complete, you use Resume-ClusterNode to bring it back into active service.
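A minimal sketch of the full maintenance sequence (the node name comes from the question; the -Drain and -Failback parameters are shown as an illustration):
# Drain all roles and pause the node before maintenance
Suspend-ClusterNode -Name S2D-Node-03 -Drain
# ...patch and reboot the node...
# Bring the node back into service and fail roles back to it
Resume-ClusterNode -Name S2D-Node-03 -Failback Immediate
# On S2D, wait for storage repair jobs to finish before servicing the next node
Get-StorageJob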
Option a, Stop-ClusterNode, abruptly stops the cluster service on the node, which is not a graceful drain and can cause an unplanned failover.
Option c, Remove-ClusterNode, permanently evicts the node from the cluster. This is a destructive action used for decommissioning a node, not for temporary maintenance.
Option d is not valid cmdlet syntax; node draining is not configured through a -Status parameter. Draining is an implicit part of the Suspend-ClusterNode command.
Q85. Your company has a two-node on-premises failover cluster for a file share. The cluster nodes are in the same rack. You want to configure a cluster witness that is not dependent on any other local hardware (like a shared disk or a separate file server) and will protect the cluster from a datacenter-wide power outage. You have an Azure subscription. What is the most resilient witness type to configure?
A) Disk Witness
B) File Share Witness
C) Cloud Witness
D) Node Majority
Answer: C
Explanation
The most resilient option is the Cloud Witness. A Cloud Witness uses a small blob file in an Azure Storage Account as the witness vote. Its primary advantage is that it is geographically independent of the on-premises datacenter. If the entire on-premises site (including both cluster nodes) loses power or network connectivity, the Cloud Witness in Azure remains online. This provides a true third-party “arbitrator” to prevent split-brain scenarios and is the recommended witness type for any cluster with a reliable internet connection.
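A minimal sketch of configuring it with PowerShell (the storage account name and key are placeholders):
# Point the cluster quorum at a Cloud Witness blob in an Azure storage account
Set-ClusterQuorum -CloudWitness -AccountName "contosowitnessstorage" -AccessKey "<storage-account-key>"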
Option a, Disk Witness, is incorrect because it requires shared storage (which may not be available) and, more importantly, it is located in the same datacenter. A site-wide outage would take the Disk Witness offline along with the nodes.
Option b, File Share Witness, is incorrect for the same reason. It requires a third server to host the share, and that server is also likely in the same datacenter, making it vulnerable to the same site-wide outage.
Option d, Node Majority, is a quorum mode, not a witness type. For a two-node cluster, Node Majority alone cannot survive a single node failure, because losing one node loses the majority; a two-node cluster must have a witness to provide the third vote.
Q86. You are using Windows Admin Center to manage your hybrid environment. You need to onboard several on-premises Windows Server 2016 servers to Azure for management. Your goal is to be able to apply Azure policies, view them in Microsoft Defender for Cloud, and deploy Azure Monitor extensions to them. What Azure technology must you first deploy to these servers via Windows Admin Center?
A) Azure File Sync agent
B) Azure Arc for servers
C) Azure Site Recovery provider
D) Azure Automation Hybrid Runbook Worker
Answer: B
Explanation
The core technology that enables on-premises servers to be managed by Azure services is Azure Arc for servers. Azure Arc projects your on-premises machines as resources inside Azure Resource Manager (ARM). Once a server is Arc-enabled, it gets an Azure resource ID and can be managed just like an Azure VM. This allows you to apply Azure policies, see its compliance in Microsoft Defender for Cloud, and use Azure management services to deploy extensions like the Azure Monitor Agent. Windows Admin Center provides a streamlined, wizard-based interface to install the Azure Arc agent and onboard your servers.
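Windows Admin Center drives the onboarding through its wizard, but under the hood it is the Connected Machine agent that gets installed; a minimal sketch of the equivalent command-line onboarding on one server (all IDs, names, and the location are placeholders):
# Run on the on-premises server after installing the Azure Connected Machine agent (azcmagent)
azcmagent connect `
  --service-principal-id "<appId>" `
  --service-principal-secret "<secret>" `
  --tenant-id "<tenantId>" `
  --subscription-id "<subscriptionId>" `
  --resource-group "rg-arc-servers" `
  --location "eastus"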
Option a is incorrect. Azure File Sync is a specific service for synchronizing file shares; it is not a general-purpose management bridge.
Option c is incorrect. The ASR provider is for disaster recovery (replication), not for general-purpose management.
Option d is incorrect. A Hybrid Runbook Worker is a component of Azure Automation that allows runbooks to execute on on-premises machines. While it is a hybrid service, it is a consumer of the Azure Arc management plane, not the enabler of it. You typically deploy the runbook worker after the server is Arc-enabled.
Q87. A security administrator wants to monitor your on-premises Active Directory domain controllers for advanced threats, such as Pass-the-Ticket and Golden Ticket attacks. The solution must use behavioral analytics and report its findings to a cloud-based portal. What service should be deployed?
A) Microsoft Defender for Cloud
B) Microsoft Defender for Endpoint
C) Microsoft Defender for Identity
D) Azure AD Identity Protection
Answer: C
Explanation
The service designed specifically for this purpose is Microsoft Defender for Identity. This is a cloud-based security solution that leverages on-premises sensors installed on your domain controllers. These sensors monitor AD authentication traffic (Kerberos, NTLM) and events. The service then uses behavioral analytics and threat intelligence in the cloud to detect, investigate, and report on advanced identity-based attacks, including Pass-the-Ticket and Golden Ticket attacks, reconnaissance, and more.
Option a, Microsoft Defender for Cloud, is a broad security posture management (CSPM) and workload protection (CWPP) solution. It protects servers and other Azure resources but does not specialize in analyzing AD authentication protocols.
Option b, Microsoft Defender for Endpoint, is an endpoint detection and response (EDR) solution that protects client and server operating systems from malware and file-based attacks. It does not have the deep understanding of Active Directory protocols that Defender for Identity has.
Option d, Azure AD Identity Protection, is a similar concept but for Azure Active Directory. It analyzes sign-in risk and user behavior in the cloud, not for your on-premises Active Directory.
Q88. You are planning to use the Storage Migration Service (SMS) to migrate an old Windows Server 2008 R2 file server to a new Windows Server 2022 VM in Azure. What is the role of the Windows Server that runs the Storage Migration Service orchestrator?
A) It acts as the final destination for all the migrated files.
B) It installs the SMS agent on all source and destination servers.
C) It coordinates the migration, performing the inventory, transfer, and cutover.
D) It replicates the source server’s OS to Azure as an IaaS VM.
Answer: C
Explanation
The Storage Migration Service orchestrator is the “brain” of the entire migration process. This is the server (typically managed via Windows Admin Center) that you use to manage the job; a minimal setup sketch follows the list below. Its role is to:
Inventory: Connect to the source server(s) to discover all the file shares, data, and configurations.
Transfer: Manage the high-speed, multi-threaded transfer of data and permissions from the source server(s) to the destination server(s).
Cutover: Perform the final, critical step of assuming the source server’s identity (its name and IP address) to redirect clients to the new server with minimal downtime.
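A minimal setup sketch, assuming the orchestrator runs on a separate management server and the destination is the Windows Server 2022 VM (SMS and SMS-Proxy are the standard feature names):
# On the orchestrator server: install the Storage Migration Service
Install-WindowsFeature -Name SMS -IncludeManagementTools
# On the destination server: install the SMS proxy to speed up transfers
Install-WindowsFeature -Name SMS-Proxy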
Option a is incorrect. The orchestrator is the manager of the migration; the Windows Server 2022 VM in Azure is the destination.
Option b is incorrect. SMS does not use agents in the same way other services do. The orchestrator communicates with the source and destination servers using standard RPC and SMB protocols.
Option d is incorrect. SMS migrates the file server role and data only. It does not perform a “lift-and-shift” of the entire operating system. That is the job of Azure Migrate.
Q89. An administrator is troubleshooting a Windows Server 2022 Azure VM that is failing to boot. The administrator has already reviewed the Boot Diagnostics screenshot and serial log. They now need to interact with the VM’s bootloader or use the Special Administration Console (SAC) to try and repair the OS. Which Azure feature provides this interactive, command-line access?
A) Azure Bastion
B) Serial console
C) VM insights (Azure Monitor)
D) Network Watcher
Answer: B
Explanation
The feature that provides this capability is the Serial console. The Serial console in the Azure portal provides a direct, interactive, text-based connection to the VM’s COM1 serial port. This allows an administrator to interact with the server before the OS (or networking) has fully loaded. It is the primary tool for interacting with the Windows Special Administration Console (SAC), accessing the bootloader (BCD) to make repairs, or troubleshooting a server that has lost its network connectivity.
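A minimal sketch, assuming you have reached the SAC> prompt in the Serial console (the bcdedit lines are the standard way to keep EMS/SAC available on future boots; the exact repair steps depend on the failure):
# At the SAC> prompt: create a command-prompt channel, then switch to it
cmd
ch -si 1
# Inside the channel: keep EMS enabled so the Serial console continues to offer SAC
bcdedit /ems {current} on
bcdedit /emssettings EMSPORT:1 EMSBAUDRATE:115200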
Option a, Azure Bastion, is a service that provides secure RDP and SSH access to a VM. It requires the VM’s operating system and networking stack to be fully functional, so it is useless if the VM is not booting.
Option c, VM insights, is a monitoring solution that requires an agent inside the OS to be running. It provides no interactive access.
Option d, Network Watcher, is a tool for diagnosing the Azure network (vNets, NSGs, etc.), not for interacting with the VM’s console.
Q90. You are configuring a four-node Storage Spaces Direct (S2D) cluster on-premises. The cluster nodes are in a single site. The S2D storage pool has been createD) You now need to create a new 10 TB virtual disk (Volume) to host Hyper-V VMs. You need to ensure the volume can tolerate the failure of any two nodes simultaneously. Which resiliency setting should you choose when creating the volume?
A) Three-way mirror
B) Mirror-accelerated parity
C) Dual parity
D) Nested resiliency
Answer: A
Explanation
For a four-node cluster, the standard resiliency setting to tolerate the failure of two nodes is three-way mirror. A three-way mirror maintains three copies of all data, with each copy placed on a different fault domain (in this case, on different nodes). In a four-node cluster, this means that even if two nodes fail, a complete copy of the data is still available on at least one of the remaining two nodes, allowing the volume to stay online.
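A minimal sketch of creating such a volume (the pool wildcard and volume name are illustrative; -PhysicalDiskRedundancy 2 is what requests a three-way mirror):
# Create a 10 TB ReFS CSV volume as a three-way mirror (tolerates two simultaneous failures)
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMVolume01" -FileSystem CSVFS_ReFS -Size 10TB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2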
Options b and c (parity) are more capacity-efficient but are generally used for archival data or in larger-scale clusters. While dual parity can tolerate two failures, it has higher write latency and is less recommended for “hot” Hyper-V VM workloads than mirroring. Three-way mirror is the standard for high performance and two-node failure tolerance.
Option d, Nested resiliency, is a special resiliency type designed specifically for two-node S2D clusters. It provides two-failure tolerance on a two-node cluster by combining a two-way mirror with two-way parity. It is not applicable to a four-node cluster.
Q91. You need to enable encryption for a Windows Server 2022 Azure IaaS VM. The data must be encrypted at rest in the Azure storage infrastructure, but you do not need to manage the encryption keys yourself. You want to use the default, most straightforward encryption method that is enabled on all managed disks. What is this type of encryption called?
A) Azure Disk Encryption (ADE)
B) Storage Service Encryption (SSE) with Platform-Managed Keys (PMK)
C) BitLocker Drive Encryption
D) Storage Service Encryption (SSE) with Customer-Managed Keys (CMK)
Answer: B
Explanation
The default encryption enabled for all Azure managed disks is Storage Service Encryption (SSE) with Platform-Managed Keys (PMK). This provides encryption-at-rest. The “Storage Service” part means the encryption happens in the Azure storage hardware, outside the VM (as data is written to the physical disk). The “Platform-Managed Keys” part means Microsoft manages the entire key lifecycle (creation, rotation, storage), so there is no key management overhead for you. This perfectly matches the requirement for a default solution where you do not need to manage the keys.
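A minimal sketch of verifying this on a managed disk with Az PowerShell (the resource names are placeholders; EncryptionAtRestWithPlatformKey is the value that corresponds to SSE with PMK):
# Check the encryption type of the VM's OS disk; the default is EncryptionAtRestWithPlatformKey
$disk = Get-AzDisk -ResourceGroupName "rg-prod" -DiskName "vm1-osdisk"
$disk.Encryption.Type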
Option a, Azure Disk Encryption (ADE), is incorrect. ADE uses BitLocker inside the VM to encrypt the OS. It is not the default and requires more setup.
Option c, BitLocker, is the technology ADE uses, but it’s not the name of the Azure service. Manually enabling BitLocker is not the default Azure method.
Option d, SSE with CMK, is also encryption-at-rest, but the “Customer-Managed Keys” part contradicts the requirement of not needing to manage the keys yourself.
Q92. You are managing a large-scale on-premises environment. You want to automate the remediation of a common configuration issue using a PowerShell script. You want to run this script centrally from Azure, but it needs to execute on the on-premises servers. You have already onboarded these servers to Azure Arc. Which Azure service should you use to host and run your script against these Arc-enabled servers?
A) Azure Automation with a Hybrid Runbook Worker
B) Azure Policy with a remediation task
C) Azure Functions
D) Azure Site Recovery (ASR)
Answer: A
Explanation
The service designed for this is Azure Automation with a Hybrid Runbook Worker. Azure Automation is a cloud-based service for process automation. To make it “hybrid,” you deploy a Hybrid Runbook Worker agent to your on-premises servers (or, on modern Arc-enabled servers, this is a built-in extension). This worker “listens” to the Azure Automation service. You can then create a PowerShell runbook in your Azure Automation account and “target” it to run on the hybrid worker group. The script is stored in Azure but executes with local context on your on-premises server, allowing it to remediate local configuration issues.
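A minimal sketch of targeting a runbook at the hybrid worker group with Az PowerShell (the automation account, runbook, and group names are placeholders):
# Execute the remediation runbook on the on-premises hybrid worker group instead of in an Azure sandbox
Start-AzAutomationRunbook -ResourceGroupName "rg-automation" -AutomationAccountName "aa-hybrid" -Name "Fix-ConfigDrift" -RunOn "OnPremWorkerGroup"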
Option b, Azure Policy, is used for auditing and enforcing a desired state. While it can trigger remediation, Azure Automation is the primary service for executing complex, ad-hoc scripts and automation workflows.
Option c, Azure Functions, is a serverless compute service for running event-driven code. While it could be part of a complex solution, it’s not the primary tool for running administrative scripts on on-premises servers.
Option d, ASR, is a disaster recovery service and has no function for running automation scripts.
Q93. A new Windows Server 2022 VM in Azure, VM1, has been deployed. An administrator reports they cannot RDP to the server, even though the NSG rules allow RDP from their IP. They suspect a misconfiguration inside the OS, such as the Windows Firewall being enabled or the RDP service being stopped. What is the fastest way to run a command inside VM1’s guest OS from the Azure portal to check the status of the RDP service (Get-Service TermService) without making an RDP connection?
A) Use the Serial console to log in and run PowerShell.
B) Use Azure Bastion to connect and run PowerShell.
C) Use the “Run command” feature on the VM’s blade.
D) Use Azure Automation Hybrid Runbook Worker.
Answer: C
Explanation
The fastest and most direct method is the “Run command” feature. This feature is available on the VM’s blade in the Azure portal and allows you to execute arbitrary PowerShell scripts or commands (like Get-Service TermService or ipconfig) directly inside the VM’s guest OS. It works via the VM agent, does not require any network connectivity (like RDP or SSH) to the VM, and is a non-interactive way to quickly run a script and see the output. This makes it the perfect tool for troubleshooting a service or firewall rule that is blocking RDP access.
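A minimal sketch of the same check driven from Az PowerShell rather than the portal blade (resource names are placeholders; the script file contains only the Get-Service line):
# Run a PowerShell command inside VM1's guest OS via the VM agent, with no RDP connection
Set-Content -Path .\check-rdp.ps1 -Value 'Get-Service TermService'
Invoke-AzVMRunCommand -ResourceGroupName "rg-prod" -VMName "VM1" -CommandId "RunPowerShellScript" -ScriptPath ".\check-rdp.ps1"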
Option a, Serial console, is a valid troubleshooting tool, but it requires you to interactively log in through the text-based console, which is slower than just running a single command.
Option b, Azure Bastion, would fail for the same reason RDP is failing: it provides RDP-like access, and if the RDP service is stopped, Bastion will also fail to connect.
Option d, Azure Automation, would work if a Hybrid Runbook Worker was already configured, but this is a complex setup. “Run command” is built-in and available immediately for this exact purpose.
Q94. You are configuring a Windows Server 2022 failover cluster to host a highly available SQL Server instance. The cluster nodes are SQL-Node1 and SQL-Node2. You have created the clustered role for SQL Server. You need to ensure that clients can connect to the SQL Server instance using a stable network name (e.g., SQL-Prod-LNR) that automatically moves to the active node during a failover. What cluster component provides this functionality?
A) A Cluster Shared Volume (CSV)
B) A Client Access Point (CAP)
C) Cluster-Aware Updating (CAU)
D) A Storage Spaces Direct (S2D) volume
Answer: B
Explanation
The component that provides this is the Client Access Point (CAP). A CAP is a core cluster resource that consists of two parts:
A Network Name resource (the name clients use to connect, like SQL-Prod-LNR).
An IP Address resource (a “floating” IP address that moves with the network name).
When you create a clustered role like SQL Server, a CAP is created as part of it. This CAP is “owned” by whichever node is currently active. If SQL-Node1 fails, the cluster service automatically moves the CAP (both the name and the IP) to SQL-Node2. SQL-Node2 then answers for that IP and name. This allows clients to connect to the single, consistent name SQL-Prod-LNR without ever knowing which node is actually hosting the service.
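A minimal sketch of inspecting this with the FailoverClusters module (the clustered role name is a placeholder; the network name itself comes from the question):
# Show which node currently owns the SQL Server role (and therefore its Client Access Point)
Get-ClusterGroup -Name "SQL Server (MSSQLSERVER)"
# List the Network Name and IP Address resources that make up Client Access Points in the cluster
Get-ClusterResource | Where-Object ResourceType -in "Network Name","IP Address"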
Options a and d are storage components. CSV and S2D provide the shared, highly-available storage that the SQL database files would live on, but they do not provide the network name for clients to connect to.
Option c, CAU, is the patch management feature for clusters; it is not a networking component.
Q95. Your on-premises data center hosts a critical application on a VMware VM. You are using Azure Site Recovery (ASR) to replicate this VM to Azure for disaster recovery. You need to perform a regular DR drill to validate that the application will function correctly after a failover. A key requirement is that this test must not impact the production on-premises VM, which must continue replicating. What ASR feature should you use?
A) Planned Failover
B) Unplanned Failover
C) Test Failover
D) Re-protect
Answer: C
Explanation
The feature designed for this exact purpose is Test Failover. A Test Failover is a non-disruptive DR drill. When you initiate a Test Failover, ASR does the following:
Creates a new, isolated virtual network in Azure (or uses one you specify).
Creates a new Azure VM from the selected recovery point (e.g., the latest replicated data).
Attaches this test VM to the isolated network.
Because the test VM is on an isolated network, it can boot up, and you can test the application without it ever interfering with the production on-premises VM. Crucially, during the entire test, replication from the production VM to Azure continues without interruption. This perfectly meets the “no impact” and “continue replicating” requirements.
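A minimal sketch with the Az.RecoveryServices module, assuming the vault context is already set and the protected item and test network have been retrieved into variables beforehand:
# Start the DR drill into an isolated test network
Start-AzRecoveryServicesAsrTestFailoverJob -ReplicationProtectedItem $protectedItem -Direction PrimaryToRecovery -AzureVMNetworkId $testVnet.Id
# After validation, clean up the test VM; replication is unaffected throughout
Start-AzRecoveryServicesAsrTestFailoverCleanupJob -ReplicationProtectedItem $protectedItem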
Options a and b (Planned/Unplanned Failover) are “real” failovers. They shut down the on-premises VM (in the planned case) and break replication, committing the workload to Azure. This is disruptive and not a test.
Option d, Re-protect, is the action you take after a real failover to reverse replication back to the on-premises site.
Q96. You are using Azure File Sync to synchronize data from an on-premises Windows Server 2019 server to an Azure file share. This server endpoint has Cloud Tiering enabled with a “Volume Free Space” policy set to 20%. The server’s 1 TB data volume is now 90% full. What action will the Azure File Sync agent take?
A) It will stop synchronizing new files until space is manually freed.
B) It will “recall” files from Azure to fill the remaining 10% of the volume.
C) It will begin “tiering” the coolest (least-accessed) files to the cloud, replacing them with reparse points to meet the 20% free space policy.
D) It will send an alert to Azure Monitor but take no action.
Answer: C
Explanation
The defined behavior of the Cloud Tiering “Volume Free Space” policy is to maintain a specified percentage of free space on the volume. The current state is 10% free (90% full), and the policy goal is 20% free. To achieve this, the Azure File Sync agent’s filter driver (storagesync.sys) will automatically begin the tiering process. It identifies the “coolest” files (those least recently accessed) that are still fully cached locally. It then replaces the file’s data content with a small pointer (a reparse point), freeing up the local disk space. It will continue this process until the volume’s free space reaches the 20% target.
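The tiering itself is automatic, but as a minimal sketch of the related management surface (Az.StorageSync module on the admin workstation, StorageSync module on the server; resource names and the path are placeholders):
# Adjust the volume free space policy on the server endpoint
Set-AzStorageSyncServerEndpoint -ResourceGroupName "rg-afs" -StorageSyncServiceName "sss-prod" -SyncGroupName "sg-data" -Name "<serverEndpointName>" -CloudTiering -VolumeFreeSpacePercent 20
# Recall specific tiered files back to the local cache (run on the server endpoint itself)
Invoke-StorageSyncFileRecall -Path "D:\Data\Reports"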
Option a is incorrect. The agent will not stop syncing; it will actively manage the local cache to make room.
Option b is the opposite of the correct action. Recalling files consumes local space, which is what the agent is trying to free up.
Option d is incorrect. The agent will take direct action; it doesn’t just send an alert.
Q97. You have a set of on-premises Windows Servers that are not onboarded to Azure Arc. You need to collect specific Windows Event Logs (e.g., System, Security) and forward them to a Log Analytics workspace for analysis in Azure Monitor. What is the legacy agent that can be installed on these servers to send data directly to a Log Analytics workspace without an Azure Arc dependency?
A) The Azure Monitor Agent (AMA)
B) The Log Analytics Agent (MMA)
C) The Dependency Agent
D) The Azure Site Recovery Mobility Service
Answer: B
Explanation
The correct agent is the Log Analytics Agent (MMA), also known as the Microsoft Monitoring Agent. This is the legacy agent that was designed to install directly on any Windows (or Linux) machine, whether in Azure or on-premises, and report data directly to a Log Analytics workspace. It is configured with a Workspace ID and Key and does not require Azure Arc.
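A minimal sketch of attaching an already-installed MMA to a workspace from PowerShell via its COM configuration object (the workspace ID and key are placeholders; the installer UI or MSI properties are an alternative route):
# Attach the Log Analytics (MMA) agent to a workspace using the Workspace ID and primary key
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$mma.AddCloudWorkspace('<workspace-id>', '<workspace-key>')
$mma.ReloadConfiguration()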
Option a, the Azure Monitor Agent (AMA), is the new, modern agent. However, to install AMA on an on-premises server, it requires that the server first be onboarded with the Azure Arc agent. Since the prompt specifies the servers are not Arc-enabled, AMA cannot be used.
Option c, the Dependency Agent, is an auxiliary agent that is used with the MMA or AMA. Its sole purpose is to collect network dependency data for the VM Insights “Map” feature; it does not collect event logs.
Option d, the Mobility Service, is the agent for Azure Site Recovery (disaster recovery) and does not collect logs for Azure Monitor.
Q98. You are configuring a new four-node Storage Spaces Direct (S2D) cluster. A network administrator has advised you to use Network ATC to manage the host networking configuration. What is the primary benefit of using Network ATC?
A) It automates the deployment and configuration of cluster networking (vSwitches, adapters, RDMA) based on a declared “intent.”
B) It automatically migrates file shares from old servers to the S2D cluster.
C) It provides Just-In-Time (JIT) access to the cluster nodes.
D) It automatically configures Hyper-V Replica for disaster recovery.
Answer: A
Explanation
The primary benefit of Network ATC is that it dramatically simplifies and standardizes the complex task of configuring host networking for an S2D or Azure Stack HCI cluster. Instead of manually creating vSwitches and vNICs and configuring complex settings like RDMA, QoS, and Data Center Bridging (DCB) on every node, you simply declare an “intent.” For example, you declare an intent for “Storage” and “Management” traffic. Network ATC then takes over and automatically deploys the correct vSwitches, configures the physical adapters, and ensures the configuration is identical and compliant across all nodes in the cluster.
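A minimal sketch of declaring an intent with the NetworkATC module (the adapter and intent names are placeholders):
# Declare one converged intent covering management, compute, and storage traffic
Add-NetIntent -Name "ConvergedIntent" -Management -Compute -Storage -AdapterName "pNIC1","pNIC2"
# Confirm every node has applied the intent successfully
Get-NetIntentStatus -Name "ConvergedIntent"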
Option b is incorrect. The tool for migrating file shares is the Storage Migration Service (SMS).
Option c is incorrect. JIT access is a security feature in Microsoft Defender for Cloud.
Option d is incorrect. Hyper-V Replica is a separate DR feature. Network ATC configures the underlying network, but not the Hyper-V Replica feature itself.
Q99. Your organization uses Microsoft Defender for Cloud across all Azure VMs and Azure Arc-enabled servers. A security recommendation with a high severity score, “Machines should be configured securely,” is appearing for many servers. You need to investigate which specific OS-level security baselines or registry keys are misconfigured. This information is provided by what underlying feature?
A) The Threat and Vulnerability Management (TVM) component of Microsoft Defender for Endpoint
B) Adaptive Application Controls (AAC)
C) The Azure Policy Guest Configuration extension
D) The Log Analytics agent’s security event collection
Answer: A
Explanation
This recommendation is populated by the Threat and Vulnerability Management (TVM) component of the integrated Microsoft Defender for Endpoint (MDE) solution. When you enable Microsoft Defender for Servers (Plan 1 or Plan 2), the service provisions MDE for the protected machines and deploys the MDE sensor to Azure VMs and to on-premises or multi-cloud servers connected through Azure Arc. The TVM component continuously scans operating system configurations, installed software, security settings, registry keys, and service states, and compares them against recognized security baselines such as Center for Internet Security (CIS) Benchmarks, Microsoft Security Baselines, DISA STIGs, and NIST guidance. It flags weaknesses such as registry keys that enable deprecated protocols or weak authentication methods, disabled User Account Control, permissive file-system permissions, missing security updates, weak password policies, and unnecessary services running with excessive privileges. These findings surface in Microsoft Defender for Cloud as actionable “security configuration” recommendations, complete with severity ratings, affected resources, and remediation guidance.
Option b, Adaptive Application Controls (AAC), serves a different purpose: it uses machine learning analysis of normal application execution patterns to recommend application allow-list (WDAC) policies that block unauthorized executables. It does not scan OS baselines, registry settings, or security configurations. Option c, Azure Policy Guest Configuration (now Azure Automanage Machine Configuration), is a related but compliance-oriented feature that audits in-guest settings against policy definitions and detects configuration drift; however, the baseline misconfiguration findings behind this Defender for Cloud recommendation are driven by the security-focused MDE vulnerability-management engine rather than Guest Configuration assessments. Option d, the Log Analytics agent (Microsoft Monitoring Agent), is a passive telemetry collector that forwards event logs, performance metrics, and other log data to a Log Analytics workspace; it does not assess configurations against security baselines or generate vulnerability findings. Therefore, the TVM feature within Microsoft Defender for Endpoint is the source of the security-configuration recommendations that appear in Microsoft Defender for Cloud.
Q100. You are performing a planned migration of an on-premises Hyper-V VM to Azure using Azure Migrate: Server Migration. You have already completed the initial replication. You are now in the migration window and are ready to perform the final cutover. The source VM must be shut down, and the Azure VM must be brought online with the final data changes. Which migration action should you perform?
A) Test failover
B) Migrate
C) Stop replication
D) Re-protect
Answer: B
Explanation
The correct action for the final cutover is Migrate. When you select the “Migrate” option in Azure Migrate, it performs the production cutover that makes the Azure VM the running production instance. The process first (optionally) shuts down the on-premises source Hyper-V VM so that no new writes occur and the Azure VM contains a consistent copy of the application state. It then performs one final delta replication to capture any changes made since the last incremental cycle, right up to the shutdown. Next, it creates the production Azure VM from the replicated disks using the target configuration you specified (VM size, network interfaces, public IP if configured, availability set or zone, and managed disk types). Finally, it cleans up replication metadata and marks the machine as migrated in Azure Migrate.
Option a, Test failover, is a non-disruptive validation mechanism that creates an isolated Azure VM in a separate test network while the source VM keeps running and replication continues; it is useful for verifying application functionality before the migration window, but it is not the final cutover and leaves both source and test VMs running. Option c, Stop replication, terminates the replication relationship and removes its tracking metadata without creating or starting any Azure VM; running it before the Migrate action would abandon the partially replicated disks with no path to completion, so it is appropriate only after a successful migration, to clean up replication for the decommissioned source. Option d, Re-protect, belongs to Azure Site Recovery disaster-recovery workflows, where it reverses replication after a failover in preparation for failback; it has no role in the one-way Azure Migrate migration workflow. Therefore, Migrate is the action that performs the final production cutover, shutting down the source and synchronizing the final changes before bringing the Azure VM online.