Question 21
You have a server named Server1 that runs Windows Server. Server1 has the File Server role installed. You need to configure File Server Resource Manager (FSRM) to send an email notification when users exceed their storage quota. What should you configure first?
A) Email notifications in FSRM options
B) File screen templates
C) Storage reports
D) Classification rules
Answer: A
Explanation:
File Server Resource Manager is a comprehensive storage management tool that helps administrators control and manage the quantity and type of data stored on file servers. Before FSRM can send any email notifications for quota violations or other events, you must first configure the email notification settings in FSRM options. This configuration establishes the fundamental communication parameters that FSRM will use for all email-based notifications throughout the system.
The email notification configuration in FSRM options requires you to specify the SMTP server that will relay the notification emails, the administrator email addresses that should receive notifications, and the default From address for emails sent by FSRM. Without this basic configuration, FSRM cannot send emails regardless of how quotas, file screens, or other monitoring features are configured. This is a prerequisite step that must be completed before any email notification functionality can work in FSRM.
After configuring email notifications in FSRM options, you can then proceed to create storage quotas with threshold notifications that trigger email alerts when users approach or exceed their allocated storage limits. The system will use the SMTP settings you configured to deliver these notifications to specified administrators or users. This hierarchical configuration approach ensures that all notification features share common email infrastructure settings while allowing specific customization for individual quotas and monitoring rules.
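As a hedged sketch, the same settings can be applied with the FileServerResourceManager PowerShell module; the SMTP server and email addresses below are hypothetical placeholders:

    # Configure the global FSRM email settings (all values are example placeholders).
    Set-FsrmSetting -SmtpServer "smtp.contoso.com" `
        -AdminEmailAddress "fsadmins@contoso.com" `
        -FromEmailAddress "fsrm@contoso.com"

    # Send a test message to verify the SMTP configuration before relying on quota alerts.
    Send-FsrmTestEmail -ToEmailAddress "fsadmins@contoso.com"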
Why other options are incorrect: B is incorrect because file screen templates define which file types are blocked or allowed on specific folders, preventing users from saving unauthorized file types. While file screens can also send email notifications, configuring templates does not establish the underlying email infrastructure needed for any FSRM notifications to function. C is incorrect because storage reports generate scheduled or on-demand reports about storage usage, file types, and other metrics. While reports can be emailed to administrators, you cannot configure reports or their delivery until the basic email notification settings are established in FSRM options. D is incorrect because classification rules automatically assign properties to files based on criteria like location, content, or file attributes. Classification is used for file management tasks but is unrelated to configuring the email infrastructure required for quota notifications.
Question 22
You have an Azure subscription that contains a virtual network named VNet1. You have an on-premises network that connects to VNet1 by using a Site-to-Site VPN. You need to configure Azure DNS private zones to allow name resolution between Azure and on-premises resources. What should you configure first in Azure?
A) A private DNS zone
B) A virtual network link
C) A DNS forwarder
D) A conditional forwarder
Answer: A
Explanation:
Azure DNS private zones provide name resolution for resources within Azure virtual networks and can be integrated with on-premises networks for hybrid DNS scenarios. To enable name resolution between Azure and on-premises resources, you must first create an Azure DNS private zone, which establishes the DNS namespace that will be used for resource registration and resolution. The private zone acts as the authoritative DNS zone for the specified domain within Azure and serves as the foundation for all subsequent DNS configuration.
Creating the private DNS zone involves specifying the domain name that will be used for resources, such as contoso.local or internal.contoso.com. This zone is private to Azure, meaning it is not accessible from the public internet and only provides name resolution to virtual networks that are explicitly linked to it. The private zone can automatically register Azure resources like virtual machines when they are deployed, or you can manually create DNS records for resources as needed.
After creating the private DNS zone, you can then link it to VNet1, which enables resources in that virtual network to query and resolve names within the zone. For complete hybrid name resolution, you would also configure on-premises DNS servers to conditionally forward queries for the private zone domain to Azure DNS, and configure Azure DNS to forward queries for on-premises domains to your on-premises DNS servers. However, all of these subsequent configurations depend on first establishing the private DNS zone itself.
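A minimal Az PowerShell sketch of this ordering, assuming a hypothetical resource group named RG1 and the zone name internal.contoso.com:

    # Step 1: create the private DNS zone; this must exist before anything else.
    New-AzPrivateDnsZone -ResourceGroupName "RG1" -Name "internal.contoso.com"

    # Step 2: link the zone to VNet1 so its resources can resolve, and optionally
    # auto-register, names in the zone.
    $vnet = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1"
    New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "RG1" `
        -ZoneName "internal.contoso.com" -Name "VNet1-link" `
        -VirtualNetworkId $vnet.Id -EnableRegistration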
Why other options are incorrect: B is incorrect because a virtual network link connects a private DNS zone to a virtual network, enabling resources in that network to use the zone for name resolution. However, you cannot create a virtual network link until the private DNS zone exists, making the private zone creation the first required step. C is incorrect because a DNS forwarder configuration would be set up on DNS servers to forward queries to other DNS servers, but this is a secondary configuration step that occurs after the private zone exists and is linked to the virtual network. D is incorrect because conditional forwarders are configured on DNS servers to forward queries for specific domains to designated DNS servers. This configuration would be part of integrating on-premises DNS with Azure DNS but cannot be done until the Azure private DNS zone is created and operational.
Question 23
You have a server named Server1 that runs Windows Server and has the Hyper-V server role installed. You create a virtual machine named VM1 on Server1. You need to add a virtual Fibre Channel adapter to VM1. What should you do first on Server1?
A) Create a virtual Fibre Channel SAN
B) Create a virtual switch
C) Enable NPIV on the physical Fibre Channel adapter
D) Install the Data Center Bridging feature
Answer: A
Explanation:
Virtual Fibre Channel in Hyper-V allows virtual machines to connect directly to Fibre Channel storage area networks, enabling scenarios like guest clustering and direct SAN access for virtual machines. Before you can add a virtual Fibre Channel adapter to a virtual machine, you must first create a virtual Fibre Channel SAN in Hyper-V. The virtual Fibre Channel SAN is a logical construct that associates physical Fibre Channel host bus adapters on the Hyper-V host with virtual machines, providing the connection pathway between the virtual environment and the physical Fibre Channel infrastructure.
Creating a virtual Fibre Channel SAN involves using Hyper-V Manager or PowerShell to define the SAN and associate it with one or more physical Fibre Channel adapters installed in Server1. This configuration establishes the mapping between the physical Fibre Channel ports and the virtual Fibre Channel infrastructure that virtual machines will use. The virtual SAN can be associated with multiple physical adapters for redundancy and load balancing, providing the same multipath capabilities that physical servers enjoy when connecting to Fibre Channel storage.
After the virtual Fibre Channel SAN is created, you can add virtual Fibre Channel adapters to VM1 and configure them to use the virtual SAN. Each virtual adapter receives unique World Wide Port Names that identify it to the Fibre Channel fabric, allowing the SAN administrators to zone and mask storage appropriately. The virtual Fibre Channel adapter in the VM appears as a standard Fibre Channel HBA to the guest operating system, enabling standard Fibre Channel storage operations without requiring iSCSI or other workarounds.
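A hedged PowerShell sketch of the sequence; the SAN name is arbitrary, and the World Wide Names shown are placeholders that should be replaced with values from the host's actual HBAs:

    # List the physical Fibre Channel ports and their World Wide Names on Server1.
    Get-InitiatorPort | Select-Object NodeAddress, PortAddress

    # Create the virtual Fibre Channel SAN (the WWNs below are placeholders).
    New-VMSan -Name "ProductionSAN" `
        -WorldWideNodeName "C003FF0000FFFF00" `
        -WorldWidePortName "C003FF5778E50002"

    # Only after the virtual SAN exists can a virtual FC adapter be added to VM1.
    Add-VMFibreChannelHba -VMName "VM1" -SanName "ProductionSAN"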
Why other options are incorrect: B is incorrect because virtual switches provide virtual network connectivity for virtual machines using Ethernet networking, not Fibre Channel connectivity. Virtual switches and virtual Fibre Channel SANs serve different purposes and are configured separately. C is incorrect because while NPIV must be supported and enabled on the physical Fibre Channel adapter for virtual Fibre Channel to function, this is typically enabled at the hardware or firmware level before configuring Hyper-V. The first configuration step within Hyper-V itself is creating the virtual Fibre Channel SAN. D is incorrect because Data Center Bridging is a set of networking enhancements for converged fabric scenarios, particularly for technologies like SMB Direct and RDMA over Ethernet. DCB is not required for or related to virtual Fibre Channel functionality.
Question 24
You have a Windows Server failover cluster named Cluster1. Cluster1 hosts a virtual machine named VM1. You need to configure VM1 to automatically restart on another cluster node if the virtual machine becomes unresponsive. What should you configure?
A) VM monitoring
B) Cluster-Aware Updating
C) Virtual machine priority
D) Drain on shutdown
Answer: A
Explanation:
VM monitoring is a failover clustering feature that provides application-level health monitoring for virtual machines running on a cluster. Unlike basic virtual machine high availability, which only detects when a VM or host fails catastrophically, VM monitoring can detect when a virtual machine becomes unresponsive or when specific services within the VM stop functioning properly. When VM monitoring detects that a virtual machine is not responding according to configured health checks, it can automatically restart the VM on the same node or fail it over to another cluster node.
VM monitoring works by leveraging heartbeat signals and service monitoring within the guest operating system. The cluster monitors the VM’s heartbeat through integration services, and you can additionally configure monitoring for specific services or applications running inside the virtual machine. If the heartbeat is lost or monitored services fail for a specified duration, the cluster determines that the VM is unhealthy and takes corrective action. The action can include attempting to restart the VM in place, or if that fails or is not configured, failing the VM over to another cluster node where it is restarted.
Configuration of VM monitoring involves enabling the feature for the specific virtual machine through Failover Cluster Manager or PowerShell, specifying which services within the guest OS should be monitored, and defining the thresholds and actions to take when problems are detected. This provides a more sophisticated level of high availability than basic VM clustering, as it can detect and remediate application-level failures that might not cause the entire virtual machine to crash but still render it unable to serve its intended purpose.
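For example, monitoring a service inside VM1 takes a single cmdlet from the FailoverClusters module; the Print Spooler service below is only an illustration:

    # Enable VM monitoring for a specific service inside the guest OS of VM1.
    Add-ClusterVMMonitoredItem -VirtualMachine "VM1" -Service "Spooler"

    # Confirm which items the cluster is monitoring for VM1.
    Get-ClusterVMMonitoredItem -VirtualMachine "VM1"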
Why other options are incorrect: B is incorrect because Cluster-Aware Updating automates the process of applying Windows updates to cluster nodes while maintaining availability by coordinating the update process across nodes. CAU does not monitor VM health or trigger restarts based on unresponsiveness. C is incorrect because virtual machine priority determines the relative importance of VMs when the cluster needs to make decisions about which VMs to start first during resource constraints or recovery scenarios. Priority does not provide health monitoring or automatic restart capabilities. D is incorrect because drain on shutdown is a setting that controls whether running VMs are live migrated off a node when the node is shut down gracefully. It relates to planned maintenance scenarios, not detecting and recovering from VM unresponsiveness.
Question 25
You have an Azure subscription that contains a storage account named storage1. You plan to use Azure File Sync to sync files from an on-premises file server to storage1. You need to prepare storage1 for Azure File Sync. What should you create in storage1?
A) A file share
B) A blob container
C) A queue
D) A table
Answer: A
Explanation:
Azure File Sync synchronizes on-premises file servers with Azure Files, which is the file share service within Azure Storage. To prepare storage1 for use with Azure File Sync, you must create an Azure file share within the storage account. This file share serves as the cloud endpoint where all synchronized files from the on-premises server will be stored. The file share provides SMB and REST protocol access to the files, enabling both Azure-based and on-premises resources to access the centralized data.
The file share in Azure Files acts as the authoritative copy of your data in the cloud. When you configure Azure File Sync, you create a sync group that includes this file share as the cloud endpoint and one or more on-premises file servers as server endpoints. The sync service then maintains consistency between all endpoints, ensuring that changes made on any endpoint are replicated to all others. The Azure file share can be accessed directly using the SMB protocol from Azure VMs or on-premises systems with appropriate connectivity, providing an additional access method beyond the cached copies on sync-enabled servers.
Creating the file share before configuring Azure File Sync allows you to verify connectivity and access permissions, and you can even pre-populate the share with data if needed. The file share should be sized appropriately for your data, considering that Azure Files has quotas and performance tiers that should match your workload requirements. Standard file shares can be up to 100 TiB in size when the large file share feature is enabled, and premium file shares offer higher performance for demanding workloads.
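As a sketch, assuming a hypothetical resource group named RG1 and a share named syncshare, the Az PowerShell module creates the share as follows:

    # Create the Azure file share in storage1 that will become the cloud endpoint.
    New-AzRmStorageShare -ResourceGroupName "RG1" `
        -StorageAccountName "storage1" `
        -Name "syncshare" -QuotaGiB 1024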
Why other options are incorrect: B is incorrect because blob containers are used for Azure Blob Storage, which stores unstructured object data and is designed for scenarios like backups, media files, and big data analytics. Azure File Sync specifically requires Azure Files, not Blob Storage, as the cloud endpoint. C is incorrect because Azure Queue storage provides messaging between application components and is used for building distributed applications with decoupled components. Queues are not file storage and cannot be used with Azure File Sync. D is incorrect because Azure Table storage is a NoSQL key-value store used for storing structured data. Tables do not provide file system capabilities and are incompatible with Azure File Sync, which requires file share infrastructure.
Question 26
You have a server named Server1 that runs Windows Server. Server1 has the Web Server (IIS) role installed. You need to configure Server1 to require client certificate authentication for a website. What should you configure in IIS?
A) SSL Settings
B) Authentication
C) Request Filtering
D) Authorization Rules
Answer: A
Explanation:
Client certificate authentication in IIS requires clients to present valid digital certificates to access a website, providing strong authentication through public key infrastructure. To configure this requirement in IIS, you must modify the SSL Settings for the specific website or application. SSL Settings in IIS control various aspects of SSL/TLS configuration, including whether client certificates are ignored, accepted, or required for establishing connections to the website.
In the SSL Settings configuration, you have three options for client certificates: ignore client certificates, accept client certificates, or require client certificates. When you select require, IIS will refuse connections from any client that does not present a valid client certificate during the SSL/TLS handshake. The server validates the client certificate against trusted root certificate authorities and can optionally check certificate revocation status. This configuration provides mutual authentication where both the server and client prove their identities using certificates.
After configuring SSL Settings to require client certificates, you can implement additional authorization logic to determine which specific certificates or certificate attributes grant access to resources. IIS can map client certificates to Windows user accounts, allowing integration with traditional Windows authentication and authorization mechanisms. The SSL Settings configuration is the foundational requirement that enforces the presence of client certificates, while authorization rules determine what authenticated clients can access.
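A minimal sketch using the WebAdministration module, assuming the site is the default one; note that SslRequireCert is combined with the Ssl flag because client certificates only apply to HTTPS connections:

    Import-Module WebAdministration

    # Require SSL and require client certificates for the site (site name is an example).
    Set-WebConfigurationProperty -PSPath "MACHINE/WEBROOT/APPHOST" `
        -Location "Default Web Site" `
        -Filter "system.webServer/security/access" `
        -Name "sslFlags" -Value "Ssl,SslRequireCert"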
Why other options are incorrect: B is incorrect because the Authentication section in IIS configures various authentication methods like Anonymous, Basic, Windows, and Forms authentication, but does not control client certificate requirements. Client certificate authentication is configured through SSL Settings rather than the standard authentication methods section. C is incorrect because Request Filtering is a security feature that screens incoming requests based on rules and can block specific request patterns, query strings, or HTTP verbs. It does not handle client certificate validation or requirements. D is incorrect because Authorization Rules determine what authenticated users or groups can access specific resources, but they do not configure the requirement for client certificates. Authorization operates after authentication has already occurred.
Question 27
You have a server named Server1 that runs Windows Server and hosts several virtual machines. You need to move the virtual machine configuration files and virtual hard disks to a different location while the virtual machines continue to run. What should you use?
A) Storage Migration
B) Live Migration
C) Export and Import
D) Hyper-V Replica
Answer: A
Explanation:
Storage Migration in Hyper-V allows you to move virtual machine storage files, including virtual hard disks and configuration files, from one location to another while the virtual machine continues to run without downtime. This feature is essential for scenarios where you need to migrate VMs to new storage hardware, rebalance storage utilization across volumes, or move VMs off storage that requires maintenance. Storage Migration operates independently of Live Migration and does not require failover clustering or multiple Hyper-V hosts.
The storage migration process creates a mirror of the virtual machine’s storage at the destination location while the VM continues to operate from the original location. Hyper-V tracks all changes made to the storage during the migration process and ensures these changes are reflected in the copy. Once the initial copy is complete and all changes have been synchronized, Hyper-V performs a rapid cutover where it redirects all storage I/O operations to the new location. This cutover typically completes in seconds, causing no perceptible downtime for the running virtual machine.
Storage Migration can be performed through Hyper-V Manager by right-clicking a virtual machine and selecting Move, then choosing the option to move only the virtual machine’s storage. You can move all storage items to a single location, move different items to different locations, or move only specific virtual hard disks while leaving configuration files in place. PowerShell provides even more granular control through the Move-VMStorage cmdlet, allowing scripting and automation of storage migration operations.
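For instance, moving all of VM1's files to a hypothetical D:\VMStore folder while the VM keeps running:

    # Move configuration files and virtual hard disks for the running VM1.
    Move-VMStorage -VMName "VM1" -DestinationStoragePath "D:\VMStore\VM1"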
Why other options are incorrect: B is incorrect because Live Migration moves a running virtual machine from one Hyper-V host to another, relocating the VM’s memory and processor state between physical servers. While Live Migration can optionally include storage migration when moving between hosts, it is not the appropriate tool for moving only storage on the same host. C is incorrect because Export and Import creates a copy of a virtual machine that can be imported on the same or different host, but this process requires shutting down the virtual machine. Export creates a portable copy, but it does not support moving storage while the VM continues to run. D is incorrect because Hyper-V Replica is a disaster recovery feature that creates and maintains a replica copy of a virtual machine on a different host or site. Replica is designed for business continuity, not for relocating storage on a single host while maintaining operation.
Question 28
You have an on-premises Active Directory Domain Services (AD DS) domain. You have an Azure subscription that contains an Azure SQL Managed Instance named SQL1. You need to configure SQL1 to support Windows Authentication for on-premises user accounts. What should you configure first?
A) Azure AD Connect
B) Azure AD Domain Services
C) A VPN gateway
D) Azure AD Application Proxy
Answer: A
Explanation:
Azure SQL Managed Instance can support Windows Authentication for on-premises Active Directory accounts through integration with Azure Active Directory. To enable this hybrid authentication scenario, you must first synchronize your on-premises AD DS user accounts to Azure Active Directory using Azure AD Connect. Azure AD Connect establishes the synchronization pipeline that copies user identities and credentials from your on-premises directory to Azure AD, creating the necessary identity foundation for Windows Authentication to function.
Azure AD Connect performs directory synchronization on a scheduled basis, ensuring that user accounts created or modified in the on-premises directory are reflected in Azure AD. For SQL Managed Instance Windows Authentication, you need to configure Azure AD Connect with password hash synchronization, pass-through authentication, or federation, depending on your organization’s authentication requirements and security policies. This synchronization creates a unified identity that allows on-premises users to authenticate to cloud resources using their domain credentials.
After Azure AD Connect is configured and synchronizing identities, you then configure Azure SQL Managed Instance to use Azure AD authentication and set up Kerberos authentication for the managed instance. This involves configuring the managed instance to trust Azure AD as an authentication provider and establishing the necessary Kerberos infrastructure using Azure AD Kerberos. The synchronized identities enable users to authenticate to SQL Managed Instance using their on-premises Windows credentials through Azure AD, providing a seamless authentication experience.
Why other options are incorrect: B is incorrect because Azure AD Domain Services provides managed domain services like domain join and group policy for Azure VMs, but it is not the primary requirement for enabling SQL Managed Instance Windows Authentication. Azure AD Connect provides the identity synchronization needed first. C is incorrect because while network connectivity between on-premises and Azure is necessary, a VPN gateway is not specifically required for Windows Authentication to SQL Managed Instance. Azure AD Connect can synchronize over internet connections, and the authentication flow works through Azure AD regardless of VPN presence. D is incorrect because Azure AD Application Proxy provides secure remote access to on-premises web applications by publishing them through Azure AD. It is not involved in synchronizing identities or enabling Windows Authentication for SQL Managed Instance.
Question 29
You have a server named Server1 that runs Windows Server. Server1 is configured as a DNS server. You need to configure DNS to use DNS-over-HTTPS (DoH) for outbound DNS queries. What should you configure on Server1?
A) DNS client settings
B) Forwarders
C) Root hints
D) Conditional forwarders
Answer: A
Explanation:
DNS-over-HTTPS is a protocol that encrypts DNS queries by sending them over HTTPS connections, providing privacy and security benefits by preventing eavesdropping and tampering with DNS traffic. When you want a Windows Server DNS server to use DoH for its outbound queries, you need to configure the DNS client settings on the server itself. The DNS client determines how the server resolves names when it needs to look up information on behalf of itself or when forwarding queries, and these client settings control whether DoH is used for these resolution operations.
Configuring DoH on the DNS client involves modifying the network adapter’s DNS server settings or using PowerShell to configure DoH for specific DNS servers. You specify which DNS servers support DoH and should be used with encrypted transport. For example, you might configure the server to use public DoH providers like Cloudflare or Google’s DNS over HTTPS services. The configuration ensures that when Server1’s DNS client component needs to resolve names, it uses encrypted HTTPS connections to the specified DNS servers rather than traditional unencrypted UDP or TCP DNS queries.
This configuration provides end-to-end encryption for DNS queries originating from Server1, protecting the privacy of DNS lookups and preventing man-in-the-middle attacks or DNS manipulation. The DNS client settings are separate from the DNS server role configuration, which determines how Server1 responds to queries from other clients. By configuring the DNS client to use DoH, you ensure that Server1’s own name resolution requests are secured, which is particularly important when the server needs to resolve internet names or forward queries to upstream DNS servers.
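On Windows Server 2022 and later, the DnsClient module exposes this configuration. A hedged example using Cloudflare's public DoH endpoint:

    # Register a DoH-capable DNS server together with its HTTPS query template,
    # disallowing silent fallback to unencrypted UDP.
    Add-DnsClientDohServerAddress -ServerAddress "1.1.1.1" `
        -DohTemplate "https://cloudflare-dns.com/dns-query" `
        -AllowFallbackToUdp $false -AutoUpgrade $true

    # Review the DoH servers the client currently knows about.
    Get-DnsClientDohServerAddress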
Why other options are incorrect: B is incorrect because forwarders configure where the DNS server sends queries it cannot resolve authoritatively, but configuring a forwarder alone does not enable DoH. Traditional forwarder configuration uses standard DNS protocols unless the DNS client itself is configured to use DoH when communicating with those forwarders. C is incorrect because root hints provide the addresses of root DNS servers used for recursive resolution, but they do not control the protocol or encryption used for queries. Root hints are part of the DNS server role configuration, not the mechanism for enabling DoH. D is incorrect because conditional forwarders direct queries for specific domains to designated DNS servers, but like standard forwarders, they do not inherently enable DoH. The underlying DNS client settings determine whether DoH is used when querying conditional forwarders.
Question 30
You have a server named Server1 that runs Windows Server. You plan to use Server1 as a software-defined networking (SDN) gateway. You need to install the required Windows Server role for SDN gateway functionality. Which role should you install?
A) Remote Access
B) Network Controller
C) Hyper-V
D) Routing and Remote Access
Answer: A
Explanation:
Software-defined networking in Windows Server enables centralized configuration and management of network infrastructure through virtualization and automation. An SDN gateway provides connectivity between virtual networks and external networks, including site-to-site VPN connections, point-to-site VPN connections, and forwarding traffic between virtual networks and physical networks. To deploy an SDN gateway, you must install the Remote Access role on the server, which provides the underlying gateway functionality required for SDN network connectivity scenarios.
The Remote Access role in Windows Server includes components that enable various network access scenarios, including VPN, DirectAccess, and routing. For SDN gateway deployments, the Remote Access role provides the gateway services that are managed and orchestrated by the Network Controller. When deployed as an SDN gateway, the Remote Access role operates under the control of the Network Controller, which configures tenant connections, routing policies, and network traffic handling based on the SDN policy configuration.
After installing the Remote Access role, you configure the server as an SDN gateway by connecting it to the Network Controller and assigning it to a gateway pool. The Network Controller then provisions gateway connections on the server as needed for tenant networks. Multiple gateway servers can be deployed in pools for capacity and redundancy, with the Network Controller automatically distributing tenant connections across available gateway resources.
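Installing the role itself is a single cmdlet; the gateway pool assignment that follows is performed through the Network Controller deployment tooling:

    # Install the Remote Access role that provides SDN gateway functionality.
    Install-WindowsFeature -Name RemoteAccess -IncludeManagementTools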
Why other options are incorrect: B is incorrect because the Network Controller role is the centralized management component that orchestrates SDN infrastructure, but it does not itself function as a gateway. Network Controller manages gateways but does not provide gateway connectivity services. You would install Network Controller on separate management servers, not on the gateway servers themselves. C is incorrect because while Hyper-V is required for creating virtualized SDN infrastructure and hosts virtual machines and virtual switches, it does not provide gateway functionality. Hyper-V hosts the virtual networks but does not route traffic between virtual and physical networks. D is incorrect because Routing and Remote Access is the legacy name for remote access services in older Windows Server versions. In current versions, this functionality is provided by the Remote Access role, so this option is legacy terminology for the same underlying functionality rather than the name of the role you would install.
Question 31
You have a Windows Server failover cluster. You plan to perform maintenance on one of the cluster nodes. You need to prevent the node from hosting clustered roles during the maintenance window. What should you do?
A) Pause the node
B) Stop the cluster service
C) Evict the node
D) Drain roles from the node
Answer: D
Explanation:
Draining roles from a cluster node is the proper procedure for preparing a node for planned maintenance while maintaining cluster health and service availability. When you drain roles from a node, the cluster gracefully moves all running clustered roles, including virtual machines and other resources, from that node to other available cluster nodes. This operation uses live migration for virtual machines and controlled failover for other clustered roles, ensuring minimal disruption to services while clearing the target node for maintenance.
The drain operation is intelligent and considers factors like resource availability on destination nodes, preferred owners, and current utilization when determining where to move clustered roles. After the drain completes, the node remains a member of the cluster and continues to participate in quorum voting and cluster communication, but it no longer hosts any active workloads. This state is ideal for performing hardware maintenance, installing updates, or making configuration changes that require the node to be free of production workloads while maintaining its cluster membership.
You can initiate the drain operation through Failover Cluster Manager by right-clicking the node and selecting Pause and Drain Roles, or through PowerShell using the Suspend-ClusterNode cmdlet with the Drain parameter. After maintenance is complete, you resume the node, making it available to host clustered roles again. The cluster can then automatically or manually rebalance roles back to the node as appropriate.
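The PowerShell equivalent of the maintenance cycle, assuming a hypothetical node named Node1:

    # Drain roles off Node1 and pause it before maintenance begins.
    Suspend-ClusterNode -Name "Node1" -Drain

    # ...perform maintenance on the node...

    # Resume the node; -Failback Immediate moves drained roles back right away.
    Resume-ClusterNode -Name "Node1" -Failback Immediate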
Why other options are incorrect: A is incorrect because pausing a node without draining it prevents new roles from being placed on the node but does not move existing roles off the node. Paused nodes continue to run their current workloads, which could interfere with maintenance activities. You should drain roles rather than simply pause when preparing for maintenance. B is incorrect because stopping the cluster service on a node causes an uncontrolled interruption where all roles on that node fail over immediately without graceful migration. This creates unnecessary disruption and potential downtime for services, unlike the controlled migration provided by draining. C is incorrect because evicting a node permanently removes it from the cluster membership. Eviction is used when decommissioning a node, not for temporary maintenance. Evicted nodes must be re-added to the cluster after maintenance, requiring reconfiguration of cluster resources and potentially disrupting quorum.
Question 32
You have an Azure subscription and an on-premises Active Directory Domain Services domain. You plan to deploy Azure Virtual Desktop. You need to ensure that users can use their on-premises credentials to authenticate to Azure Virtual Desktop session hosts. What should you implement?
A) Hybrid Azure AD join
B) Azure AD Application Proxy
C) Azure AD Pass-through Authentication
D) Azure AD B2B collaboration
Answer: A
Explanation:
Azure Virtual Desktop requires session host virtual machines to be joined to an identity system for user authentication and management. To enable users to authenticate with their on-premises Active Directory credentials, the session hosts must be configured as hybrid Azure AD joined devices. Hybrid Azure AD join connects devices to both the on-premises Active Directory domain and Azure Active Directory simultaneously, enabling seamless authentication and single sign-on experiences for users accessing the Azure Virtual Desktop environment.
When session hosts are hybrid Azure AD joined, users can sign in using their on-premises domain credentials, which are synchronized to Azure AD through Azure AD Connect. The hybrid join provides the necessary trust relationship between the on-premises domain, Azure AD, and the Azure Virtual Desktop infrastructure. This configuration allows the session hosts to authenticate users against on-premises Active Directory while also being manageable through Azure AD and Intune for cloud-based policy enforcement and conditional access capabilities.
The hybrid Azure AD join process involves configuring Azure AD Connect to synchronize computer objects from on-premises AD to Azure AD, and configuring the appropriate service connection points in Active Directory. Session hosts that are domain-joined to the on-premises AD automatically register with Azure AD through this process, creating the hybrid identity relationship. This approach provides the best user experience as it maintains compatibility with existing on-premises infrastructure while enabling cloud-based management and security controls.
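Once the session hosts are deployed, the join state can be verified with the built-in dsregcmd tool:

    # Run on a session host; "AzureAdJoined : YES" together with
    # "DomainJoined : YES" in the output indicates hybrid Azure AD join.
    dsregcmd /status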
Why other options are incorrect: B is incorrect because Azure AD Application Proxy provides secure remote access to on-premises web applications by publishing them through Azure AD, but it does not provide domain join or authentication infrastructure for Azure Virtual Desktop session hosts. Application Proxy serves a different purpose in hybrid scenarios. C is incorrect because Azure AD Pass-through Authentication is an authentication method that allows users to use the same on-premises passwords in Azure AD, but it does not provide domain join functionality for session hosts. While useful for authentication, it does not fulfill the requirement for session host identity integration. D is incorrect because Azure AD B2B collaboration enables organizations to share applications with external users from other organizations, but it is not related to domain joining session hosts or enabling on-premises credential authentication for Azure Virtual Desktop.
Question 33
You have a server named Server1 that runs Windows Server and has the DHCP Server role installed. You need to configure DHCP to assign IPv6 addresses to clients. What should you create on Server1?
A) A DHCPv6 scope
B) A superscope
C) A multicast scope
D) An IPv6 reservation
Answer: A
Explanation:
DHCPv6 is the IPv6 equivalent of DHCP for IPv4, providing automated configuration of IPv6 addresses and network parameters to client computers. To enable Server1 to assign IPv6 addresses through DHCP, you must create a DHCPv6 scope, which defines the range of IPv6 addresses available for assignment to clients along with associated configuration options like DNS servers and domain names. A DHCPv6 scope is conceptually similar to a DHCP scope for IPv4 but uses IPv6 address formats and follows the stateful address autoconfiguration process defined in IPv6 standards.
Creating a DHCPv6 scope involves specifying the IPv6 address prefix that will be used for client assignments, typically a /64 prefix which is standard for IPv6 subnets. You also configure the scope with options specific to IPv6, such as DNS recursive name servers, domain search lists, and other parameters that clients need for network operation. DHCPv6 operates differently from DHCPv4 in several ways, including the use of link-local addresses for communication and the ability for clients to use stateless address autoconfiguration in conjunction with or instead of DHCPv6.
After creating the DHCPv6 scope, you must activate it and ensure that the appropriate DHCPv6 options are configured. Clients must be configured to use DHCPv6 for address assignment, which may involve router advertisements that indicate managed address configuration should be used. The DHCPv6 server listens on UDP port 547 and responds to client requests with address assignments and configuration parameters from the configured scope.
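A hedged example using the IPv6 documentation prefix (2001:db8::/32) as a placeholder:

    # Create and activate a DHCPv6 scope for a /64 subnet (prefix is a placeholder).
    Add-DhcpServerv6Scope -Prefix 2001:db8:1:: -Name "Corp IPv6" -State Active

    # Configure an IPv6 DNS server option for that scope (address is a placeholder).
    Set-DhcpServerv6OptionValue -Prefix 2001:db8:1:: -DnsServer 2001:db8:1::10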
Why other options are incorrect: B is incorrect because a superscope is a collection of multiple IPv4 scopes that allows a DHCP server to provide addresses from multiple logical subnets on a single physical network segment. Superscopes are an IPv4 concept and do not apply to IPv6 address assignment through DHCPv6. C is incorrect because multicast scopes are used for IPv4 multicast address allocation through MADCAP protocol, not for standard IPv6 unicast address assignment. Multicast scopes serve a specialized purpose and do not provide general IPv6 addressing. D is incorrect because an IPv6 reservation ensures that a specific IPv6 address is always assigned to a particular client based on its DUID, but you cannot create reservations without first having a DHCPv6 scope from which to reserve addresses. The scope must exist before individual reservations can be configured.
Question 34
You have a Windows Server server named Server1 that has the Hyper-V role installed. On Server1, you create a virtual machine named VM1 that uses a dynamically expanding virtual hard disk. The virtual hard disk file grows to 200 GB. You delete a large amount of data from VM1 and the actual used space within the virtual machine is now 50 GB. You need to reduce the size of the virtual hard disk file to reclaim unused space on the physical storage. What should you do?
A) Compact the virtual hard disk
B) Convert the disk to a fixed-size virtual hard disk
C) Run the Optimize-VHD cmdlet
D) Perform a storage migration
Answer: A or C
Explanation:
Dynamically expanding virtual hard disks automatically grow as data is written to them, but they do not automatically shrink when data is deleted from within the virtual machine. To reclaim the unused space and reduce the size of the virtual hard disk file on the physical storage, you need to compact the virtual hard disk. Compacting removes the white space from the VHD or VHDX file, reducing its physical size to more closely match the actual data stored within it.
The compaction process can be performed through Hyper-V Manager by editing the virtual hard disk properties and selecting the compact option, or through PowerShell using the Optimize-VHD cmdlet which performs the same operation. Before compacting, it is recommended to shut down the virtual machine to ensure data consistency, although some compaction operations can be performed while the VM is running if the virtual hard disk is not currently in use. The compaction operation analyzes the virtual hard disk file structure and removes allocated but unused blocks, resulting in a smaller file size on the host’s storage.
Both compacting through the GUI and using Optimize-VHD accomplish the same goal of reclaiming unused space. The Optimize-VHD cmdlet provides additional options and can be scripted for automation, while the GUI method is more accessible for manual operations. After compaction, the virtual hard disk file size should more closely reflect the actual data usage within the virtual machine, freeing up valuable physical storage space on Server1.
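A short sketch of the PowerShell route; the disk path is hypothetical, and -Mode Full performs the most thorough space reclamation:

    # Shut down the VM so the virtual hard disk is not in use during compaction.
    Stop-VM -Name "VM1"

    # Compact the dynamically expanding VHDX (path is an example).
    Optimize-VHD -Path "D:\VMs\VM1\VM1.vhdx" -Mode Full

    Start-VM -Name "VM1"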
Why other options are incorrect: B is incorrect because converting a dynamically expanding disk to a fixed-size disk creates a virtual hard disk that occupies the full allocated space regardless of actual data usage, which would actually increase the file size rather than reducing it. Fixed-size disks offer performance benefits but do not reclaim unused space. D is incorrect because storage migration moves virtual machine storage files from one location to another but does not reduce the size of the virtual hard disk file. Migration simply relocates the files without compacting or optimizing them to reclaim unused space.
Question 35
You have a server named Server1 that runs Windows Server. Server1 has the File Server role installed. You need to configure Server1 to encrypt all SMB traffic. What should you configure?
A) SMB encryption in Server Manager
B) BitLocker Drive Encryption
C) IPsec connection security rules
D) EFS file encryption
Answer: A
Explanation:
SMB encryption provides end-to-end encryption of SMB data transfers, protecting file share traffic from eavesdropping and man-in-the-middle attacks. When you enable SMB encryption on Server1, all SMB 3.0 and later clients that connect to file shares can use encrypted connections, ensuring that data in transit is protected without requiring additional infrastructure like IPsec or VPNs. SMB encryption can be configured globally for the entire server or on a per-share basis through Server Manager or PowerShell.
To configure SMB encryption globally on Server1, you access the File and Storage Services section in Server Manager, navigate to the Shares section, and modify the SMB settings to require or enable encryption for all shares. Alternatively, you can use PowerShell cmdlets like Set-SmbServerConfiguration with the EncryptData parameter to enforce encryption at the server level. When configured at the server level, all shares on Server1 will use encryption, and clients must support SMB 3.0 or later to connect. For per-share encryption, you can configure individual shares to require encryption while leaving others unencrypted.
SMB encryption uses AES-CCM or AES-GCM algorithms depending on the negotiated SMB version and provides transparent encryption without requiring client configuration beyond supporting SMB 3.0 or later. The encryption is negotiated during the SMB session establishment and operates independently of the underlying network transport. This makes it an ideal solution for protecting file share traffic across untrusted networks without the complexity of configuring IPsec policies or deploying VPN infrastructure.
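Both the server-wide and per-share configurations are one-liners; Share1 is a hypothetical share name:

    # Require encryption for all SMB sessions served by Server1.
    Set-SmbServerConfiguration -EncryptData $true -Force

    # Alternatively, require encryption only for a specific share.
    Set-SmbShare -Name "Share1" -EncryptData $true -Force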
Why other options are incorrect: B is incorrect because BitLocker Drive Encryption protects data at rest by encrypting entire volumes on the server’s storage, but it does not encrypt data in transit over the network. BitLocker protects against physical theft of drives but does not secure SMB traffic between clients and servers. C is incorrect because while IPsec connection security rules can encrypt network traffic between hosts, they operate at the network layer and require configuration on both clients and servers. SMB encryption is simpler to deploy and operates at the application layer specifically for SMB protocol traffic. D is incorrect because EFS encrypts individual files on NTFS volumes, protecting them from unauthorized access when stored on disk. EFS provides file-level encryption at rest but does not encrypt SMB network traffic when files are accessed over the network.
Question 36
You have an Azure subscription that contains a Recovery Services vault named Vault1. You back up several on-premises Windows servers to Vault1 using the Microsoft Azure Recovery Services agent. You need to ensure that backup data is retained for 10 years to meet regulatory compliance requirements. What should you configure in Vault1?
A) Backup policy retention settings
B) Soft delete settings
C) Storage replication type
D) Vault credentials
Answer: A
Explanation:
Azure Backup retention policies control how long backup data is retained in the Recovery Services vault before it is automatically deleted. To meet regulatory compliance requirements for retaining backup data for 10 years, you must configure the retention settings in the backup policy associated with your on-premises Windows servers. Backup policies define the schedule for backups and specify retention ranges for daily, weekly, monthly, and yearly backup points, allowing you to maintain different retention periods for different recovery point types.
The retention configuration in backup policies is highly flexible, allowing you to specify that daily backups are retained for a certain number of days, weekly backups for a number of weeks, monthly backups for a number of months, and yearly backups for a number of years. For a 10-year retention requirement, you would configure the yearly retention setting to keep backup points for 10 years. You can specify which backup point each year should be retained, such as the first backup of the year, allowing for long-term archival of point-in-time recovery points while managing storage costs for more recent frequent backups.
When you modify retention settings in a backup policy, the changes apply to future backup points. Existing recovery points retain their original retention schedules unless you explicitly modify them. The backup service automatically manages the lifecycle of recovery points according to the configured retention policy, deleting expired backups to optimize storage costs while ensuring compliance requirements are met. Azure Backup supports retention periods of up to 99 years for yearly backup points, providing extensive flexibility for long-term data retention scenarios.
Why other options are incorrect: B is incorrect because soft delete is a security feature that prevents immediate deletion of backup data and provides a grace period during which deleted backups can be recovered. While soft delete protects against accidental or malicious deletion, it does not control the primary retention period for compliance purposes. Soft delete typically retains deleted data for an additional 14 days. C is incorrect because storage replication type determines how backup data is replicated within Azure, with options like locally redundant storage or geo-redundant storage. Replication affects durability and disaster recovery capabilities but does not control retention duration or compliance with data retention policies. D is incorrect because vault credentials are security files used to register servers with the Recovery Services vault during initial setup. Credentials authenticate servers to the vault but do not control retention policies or how long backup data is kept.
Question 37
You have a server named Server1 that runs Windows Server and has the DNS Server role installed. You need to configure DNS to prevent Server1 from resolving queries for a specific domain name. What should you create?
A) A stub zone
B) A conditional forwarder with an invalid IP address
C) A primary zone with no records
D) A DNS policy with a DENY action
Answer: D
Explanation:
DNS policies in Windows Server provide granular control over how a DNS server handles queries based on criteria such as client subnet, query type, time of day, and more. To prevent Server1 from resolving queries for a specific domain name, you should create a DNS policy with a DENY action that matches queries for that domain. DNS policies are evaluated before normal query processing, allowing you to block resolution of specific domains at the DNS server level, effectively preventing clients using Server1 from accessing those domains through DNS.
Creating a DNS policy to block a domain involves using PowerShell to define a policy that matches the specific fully qualified domain name or domain suffix you want to block, then configuring the policy action as DENY. When a client queries for the blocked domain, the DNS policy intercepts the query and returns a failure response without performing any lookup or forwarding operations. This approach is commonly used to implement DNS-based content filtering, prevent access to malicious domains, or enforce organizational policies about prohibited websites or services.
DNS policies provide a flexible and efficient mechanism for query filtering because they are evaluated early in the query processing pipeline and can be configured with various criteria and actions. You can create multiple policies with different priorities to handle complex filtering requirements, and policies can be scoped to specific zones or applied globally to the server. The DENY action specifically tells the DNS server to refuse to answer queries matching the policy, making it the appropriate solution for preventing domain resolution.
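A hedged example that denies resolution of a placeholder domain and its subdomains:

    # Refuse all queries matching the wildcard (domain name is a placeholder).
    Add-DnsServerQueryResolutionPolicy -Name "BlockDomain" `
        -Action DENY -FQDN "EQ,*.blocked.example"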
Why other options are incorrect: A is incorrect because a stub zone contains only the name server records for a delegated domain and is used to maintain delegation information and improve name resolution efficiency. Stub zones do not prevent resolution of domains but rather facilitate resolution by providing authoritative name server information. B is incorrect because while creating a conditional forwarder with an invalid IP address might cause resolution failures, it is not a proper or reliable method for blocking domain resolution. Queries would timeout rather than being explicitly denied, and this approach could cause performance issues and does not represent a supported configuration. C is incorrect because creating a primary zone with no records would make Server1 authoritative for that domain, but clients would still receive authoritative responses indicating the domain exists with no records. This does not prevent resolution but rather provides incorrect information suggesting the domain is valid but empty.
Question 38
You have a Windows Server failover cluster named Cluster1 that hosts a Scale-Out File Server named SOFS1. You need to add storage to SOFS1. The solution must ensure that the storage is available from all cluster nodes simultaneously. What type of storage should you add?
A) Cluster Shared Volume (CSV)
B) Disk witness
C) Traditional clustered disk
D) iSCSI virtual disk
Answer: A
Explanation:
Scale-Out File Server is a specific file server cluster role designed for server application workloads that requires all cluster nodes to actively serve file share requests simultaneously. This active-active architecture provides superior performance and scalability compared to traditional active-passive file clustering. To support this simultaneous multi-node access, Scale-Out File Server requires storage configured as Cluster Shared Volumes, which is the only storage type that allows multiple cluster nodes to read and write to the same storage concurrently.
Cluster Shared Volumes use a coordinated file system architecture where all nodes in the cluster can access the CSV storage simultaneously through a shared namespace. The CSV coordinator node handles metadata operations while allowing direct I/O from all nodes, eliminating bottlenecks and enabling true active-active file serving. When you add storage to a Scale-Out File Server, you must add disks to the cluster and then configure them as CSVs, which places them in the C:\ClusterStorage directory accessible from all nodes.
The CSV architecture is essential for Scale-Out File Server because it enables features like transparent failover, where clients automatically reconnect to another node if their current node fails, and SMB Multichannel, which aggregates bandwidth across multiple network interfaces. Without CSV, storage would be owned by a single node at a time, preventing the active-active operation that makes Scale-Out File Server performant and scalable. All storage added to a Scale-Out File Server must be configured as CSV to function properly with the role.
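Assuming a new disk has already been presented to every node, promoting it to a CSV is a two-step PowerShell operation; the disk name is whatever the cluster assigns:

    # Add an available disk to the cluster, then convert it to a Cluster Shared Volume.
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name "Cluster Disk 1"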
Why other options are incorrect: B is incorrect because a disk witness is a specific type of cluster resource used for quorum purposes, providing a vote in cluster membership decisions. Disk witnesses are small disks that store cluster configuration information but are not used for storing file server data and are not accessible for general storage operations. C is incorrect because traditional clustered disks use an active-passive ownership model where only one node owns and accesses the disk at any given time. This storage type is incompatible with Scale-Out File Server’s requirement for simultaneous multi-node access and would prevent the active-active operation essential to the role. D is incorrect because an iSCSI virtual disk describes storage presented over the iSCSI transport protocol, not a cluster storage configuration. While you can use iSCSI as the underlying transport for presenting storage to the cluster, the storage must still be configured as CSV rather than traditional clustered disks to work with Scale-Out File Server.
Question 39
You have an Azure subscription that contains a virtual machine named VM1 that runs Windows Server. You need to enable Azure Disk Encryption on VM1. What should you create first in the Azure subscription?
A) An Azure Key Vault
B) A Recovery Services vault
C) A storage account
D) A managed identity
Answer: A
Explanation:
Azure Disk Encryption protects virtual machine disks by encrypting them using BitLocker for Windows VMs or dm-crypt for Linux VMs, ensuring data at rest is protected from unauthorized access. To implement Azure Disk Encryption, you must first create an Azure Key Vault in your subscription. The Key Vault stores the encryption keys and secrets used to encrypt and decrypt the virtual machine disks, providing secure key management and meeting compliance requirements for cryptographic key protection.
The Azure Key Vault must be configured with appropriate access policies that allow the Azure Disk Encryption service to access it for storing and retrieving encryption keys. When you enable disk encryption on VM1, the encryption process generates encryption keys and stores them securely in the Key Vault. The virtual machine is granted access to retrieve these keys during boot and operation, allowing the operating system to decrypt the disk and function normally while maintaining encryption at rest.
Creating the Key Vault before enabling disk encryption is mandatory because Azure Disk Encryption requires a location to securely store encryption keys. The Key Vault and the virtual machine should be in the same Azure region to minimize latency and ensure proper operation. Additionally, you must enable the Key Vault for disk encryption by setting the appropriate access policy, which can be done through the Azure portal, PowerShell, or CLI during or after Key Vault creation.
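A condensed Az PowerShell sketch; the vault name, resource group, and region are hypothetical:

    # Create a Key Vault enabled for disk encryption in the same region as VM1.
    New-AzKeyVault -Name "kv-ade-example" -ResourceGroupName "RG1" `
        -Location "eastus" -EnabledForDiskEncryption

    # Enable Azure Disk Encryption on VM1 using keys stored in that vault.
    $kv = Get-AzKeyVault -VaultName "kv-ade-example" -ResourceGroupName "RG1"
    Set-AzVMDiskEncryptionExtension -ResourceGroupName "RG1" -VMName "VM1" `
        -DiskEncryptionKeyVaultUrl $kv.VaultUri -DiskEncryptionKeyVaultId $kv.ResourceId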
Why other options are incorrect: B is incorrect because a Recovery Services vault is used for backup and disaster recovery services through Azure Backup and Azure Site Recovery. While Recovery Services vaults store backup data, they are not used for managing encryption keys for Azure Disk Encryption. C is incorrect because while storage accounts are used for many Azure storage scenarios, Azure Disk Encryption does not require a separate storage account for its operation. Managed disks are already associated with built-in storage, and encryption key management is handled through Key Vault, not storage accounts. D is incorrect because while managed identities provide Azure resources with automatically managed identities for authenticating to Azure services, creating a managed identity is not a prerequisite for enabling Azure Disk Encryption. The encryption service uses different mechanisms for accessing the Key Vault.
Question 40
You have a server named Server1 that runs Windows Server and is configured as a domain controller for contoso.com. You need to delegate the ability to reset user passwords for users in a specific organizational unit (OU) to a user named User1. What should you use?
A) Active Directory Users and Computers delegation wizard
B) Group Policy Management Console
C) Active Directory Administrative Center
D) Authorization Manager
Answer: A
Explanation:
Delegating administrative permissions in Active Directory allows you to grant specific users or groups the ability to perform particular tasks without giving them broader administrative rights. To delegate the ability to reset user passwords for users in a specific OU to User1, you should use the Delegation of Control Wizard in Active Directory Users and Computers. This wizard provides a structured interface for granting granular permissions on Active Directory objects, including the common task of password reset delegation.
The Delegation of Control Wizard is accessed by right-clicking the target OU in Active Directory Users and Computers and selecting Delegate Control. The wizard guides you through selecting the user or group to whom you want to delegate permissions, in this case User1, and then presents common tasks including resetting user passwords and forcing password changes at next logon. When you select the password reset task, the wizard configures the appropriate Access Control List permissions on the OU, granting User1 the Reset Password extended right for user objects within that OU and its sub-OUs.
This delegation approach follows the principle of least privilege by granting only the specific permissions needed for User1 to perform password resets without providing broader administrative capabilities like creating or deleting user accounts, modifying group memberships, or accessing other sensitive operations. After delegation is complete, User1 can use Active Directory Users and Computers, Active Directory Administrative Center, or PowerShell to reset passwords for users within the delegated OU while being prevented from performing these operations on users in other OUs or performing other administrative tasks.
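The permissions the wizard sets can also be granted from the command line with dsacls; a hedged example for a hypothetical Sales OU:

    # /I:S scopes the grant to sub-objects; the trailing ";user" limits it to user objects.
    dsacls "OU=Sales,DC=contoso,DC=com" /I:S /G "CONTOSO\User1:CA;Reset Password;user"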
Why other options are incorrect: B is incorrect because Group Policy Management Console is used for creating, managing, and linking Group Policy Objects that configure computer and user settings across the domain. GPMC does not provide interfaces for delegating Active Directory object permissions or configuring security on organizational units. C is incorrect because while Active Directory Administrative Center can be used to view and manage Active Directory objects and could potentially be used to manually configure permissions, it does not provide a delegation wizard specifically designed for common delegation tasks. The delegation wizard in Active Directory Users and Computers is the appropriate tool. D is incorrect because Authorization Manager is used for role-based access control in applications and does not manage Active Directory permissions or delegation. Authorization Manager creates application-specific authorization policies and is not used for delegating Active Directory administrative tasks.