Visit here for our full Microsoft AZ-800 exam dumps and practice test questions.
Question 81
You have a Windows Server 2022 server named Server1 that hosts a file share. You need to configure Storage Sense to automatically delete files from the share that haven’t been accessed in 90 days. What should you do first?
A) Enable Storage Sense in Settings
B) Install the File Server Resource Manager role
C) Configure a File Server Resource Manager quota
D) Create a Storage Sense policy using Group Policy
Answer: B
Explanation:
The correct answer is option B. To automatically delete files based on access patterns on a Windows Server file share, you need to use File Server Resource Manager (FSRM), not Storage Sense. Storage Sense is primarily a client-side feature designed for Windows 10 and Windows 11 devices to manage local storage automatically.
File Server Resource Manager is a server role service that provides tools to manage and classify data stored on file servers. FSRM includes file management tasks that can be configured to automatically expire and delete files based on specific criteria, including last access time. Once FSRM is installed, you can create file management tasks that scan file shares and perform actions like moving files to a different location or deleting them after a specified period of inactivity.
Option A is incorrect because Storage Sense is not the appropriate tool for managing server-side file shares. Storage Sense is a client feature of Windows 10 and Windows 11 and is not supported for managing shared network resources on Windows Server. It won’t provide the enterprise-level management capabilities needed for file server administration.
Option C is incorrect because FSRM quotas are used to limit the amount of disk space that users or folders can consume. Quotas help prevent users from filling up server storage, but they don’t automatically delete old or unused files based on access time. Quotas are about space limitations, not file lifecycle management.
Option D is incorrect because while Group Policy can be used to configure various Storage Sense settings on client computers, it’s not the appropriate method for managing file expiration on server file shares. Storage Sense policies through Group Policy are designed for endpoint management, not server-side file management tasks that require FSRM functionality.
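As a rough sketch of the FSRM approach, the role service can be installed and a file management task created to expire files not accessed in 90 days. The share path, expiration folder, task name, and schedule below are illustrative, not values from the question:

```powershell
# Install the FSRM role service (answer B).
Install-WindowsFeature FS-Resource-Manager -IncludeManagementTools

# Expired files are moved to a holding folder rather than deleted outright.
$action = New-FsrmFmjAction -Type Expiration -ExpirationFolder "D:\Expired"

# Condition: last access time more than 90 days ago.
$condition = New-FsrmFmjCondition -Property "File.DateLastAccessed" `
    -Condition LessThan -DateOffset 90

New-FsrmFileManagementJob -Name "Expire stale files" `
    -Namespace @("D:\Shares\Data") `
    -Action $action `
    -Condition $condition `
    -Schedule (New-FsrmScheduledTask -Time (Get-Date "02:00") -Weekly @("Sunday"))
```

Moving files to an expiration folder first gives administrators a safety window before permanent deletion.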
Question 82
You manage a hybrid environment with on-premises Active Directory Domain Services (AD DS) and Azure Active Directory (Azure AD). You need to ensure that users can reset their passwords in Azure AD and have those changes synchronized back to on-premises AD DS. What feature should you enable?
A) Azure AD Connect password hash synchronization
B) Azure AD Connect pass-through authentication
C) Azure AD password writeback
D) Azure AD self-service password reset only
Answer: C
Explanation:
The correct answer is option C. Azure AD password writeback is the specific feature that enables password changes made in Azure AD to be synchronized back to the on-premises Active Directory Domain Services environment. This feature is essential for providing users with a seamless self-service password reset experience in hybrid environments.
When password writeback is enabled through Azure AD Connect, users can reset their passwords through the Azure AD portal, and those password changes are immediately written back to the on-premises Active Directory. This maintains password consistency across both cloud and on-premises environments and eliminates the need for users to contact IT support for password resets. Password writeback requires Azure AD Premium P1 or P2 licensing and must be explicitly enabled in Azure AD Connect configuration.
Option A is incorrect because password hash synchronization only synchronizes password hashes from on-premises AD DS to Azure AD in one direction (on-premises to cloud). It doesn’t enable writeback functionality that would allow password changes made in Azure AD to be synchronized back to on-premises Active Directory. Password hash sync is used for authentication purposes, not bidirectional password management.
Option B is incorrect because pass-through authentication is an authentication method that validates user credentials directly against on-premises Active Directory without storing password hashes in the cloud. While it provides seamless authentication, it doesn’t include password writeback functionality. Pass-through authentication and password writeback are separate features that serve different purposes.
Option D is incorrect because Azure AD self-service password reset (SSPR) alone only allows users to reset their passwords in Azure AD. Without password writeback enabled, those password changes remain in Azure AD and are not synchronized back to on-premises Active Directory, creating password inconsistency between cloud and on-premises environments.
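Password writeback itself is switched on in the Azure AD Connect wizard (under Optional features), but its current state can be checked from PowerShell on the Azure AD Connect server. A minimal sketch, assuming the ADSync module installed by Azure AD Connect:

```powershell
# Run on the server hosting Azure AD Connect.
Import-Module ADSync

# Lists tenant-level sync features; a PasswordWriteBack value of True
# indicates writeback is enabled for the connector.
Get-ADSyncAADCompanyFeature
```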
Question 83
You have a Windows Server 2022 failover cluster that hosts highly available virtual machines. You need to configure the cluster to automatically move virtual machines to other nodes when a node becomes unresponsive. Which feature should you configure?
A) Cluster node drain
B) Cluster node quarantine
C) Virtual machine monitoring
D) Cluster node fault tolerance
Answer: B
Explanation:
The correct answer is option B. Cluster node quarantine is a feature in Windows Server failover clustering that automatically isolates problematic nodes that become unresponsive or exhibit unstable behavior. When a node is quarantined, the cluster automatically moves resources (including virtual machines) from that node to healthy nodes in the cluster, preventing service disruption.
The quarantine feature monitors node health and communication patterns. If a node repeatedly fails health checks or causes cluster instability, it’s automatically placed in quarantine mode. The quarantined node is temporarily removed from active cluster participation, and its workloads are redistributed to other available nodes. This automated response helps maintain cluster reliability and prevents cascading failures that could affect the entire cluster environment.
Option A is incorrect because node drain is a manual or planned operation where you intentionally move all roles and virtual machines from a specific node, typically for maintenance purposes. Draining a node is an administrative action performed when you need to service hardware, apply updates, or troubleshoot issues. It’s not an automatic response to node unresponsiveness but rather a controlled procedure initiated by administrators.
Option C is incorrect because virtual machine monitoring is a feature that tracks the health of specific applications or services running inside virtual machines, not the health of cluster nodes themselves. VM monitoring can restart a virtual machine or move it to another node if the monitored application becomes unresponsive, but it doesn’t address the scenario of an entire node becoming unresponsive.
Option D is incorrect because “cluster node fault tolerance” is not a specific configurable feature in Windows Server failover clustering. While failover clustering inherently provides fault tolerance by allowing workloads to move between nodes during failures, the specific automated mechanism for handling unresponsive nodes is the quarantine feature.
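Quarantine behavior is exposed through cluster common properties, so it can be inspected and tuned from PowerShell. By default a node is quarantined after three failures within an hour and kept out of active membership for two hours (7200 seconds); the adjusted values below are illustrative:

```powershell
# View the current quarantine settings.
(Get-Cluster).QuarantineThreshold    # default: 3 failures per hour
(Get-Cluster).QuarantineDuration     # default: 7200 seconds (2 hours)

# Example: tolerate more flapping but shorten the quarantine window.
(Get-Cluster).QuarantineThreshold = 5
(Get-Cluster).QuarantineDuration  = 3600

# Return a quarantined node to service manually once it is healthy.
Start-ClusterNode -Name "Node1" -ClearQuarantine
```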
Question 84
You are configuring Windows Admin Center to manage multiple Windows Server 2022 servers. You need to ensure that all administrators can access Windows Admin Center using their Azure AD credentials. What should you configure?
A) Kerberos delegation
B) Azure AD application registration
C) Certificate-based authentication
D) Windows authentication passthrough
Answer: B
Explanation:
The correct answer is option B. To enable Azure AD authentication for Windows Admin Center, you must register Windows Admin Center as an application in Azure Active Directory. This Azure AD application registration creates the trust relationship between Windows Admin Center and Azure AD, allowing users to authenticate using their Azure AD credentials instead of traditional Windows authentication.
The registration process generates an application ID and configures the necessary permissions for Windows Admin Center to interact with Azure AD. Once configured, administrators can sign in to Windows Admin Center using their Azure AD accounts, which is particularly useful in hybrid environments where identity management is centralized in Azure AD. This integration also enables additional security features like conditional access policies and multi-factor authentication for accessing Windows Admin Center.
Option A is incorrect because Kerberos delegation is used for traditional Windows-integrated authentication scenarios where services need to authenticate to other services on behalf of users within an on-premises Active Directory domain. While Kerberos delegation is important for certain Windows Admin Center scenarios (like connecting to managed servers), it doesn’t enable Azure AD authentication for the Windows Admin Center gateway itself.
Option C is incorrect because certificate-based authentication is a method for securing connections and authenticating users or devices using digital certificates rather than passwords. While Windows Admin Center uses HTTPS with certificates to secure communications, certificate-based authentication doesn’t provide the Azure AD integration needed for users to sign in with their Azure AD credentials.
Option D is incorrect because Windows authentication passthrough refers to passing Windows credentials through to managed servers, which is relevant for on-premises Active Directory environments. This doesn’t enable Azure AD authentication for accessing the Windows Admin Center gateway and wouldn’t allow administrators to use their Azure AD credentials for sign-in.
Question 85
You have a Windows Server 2022 server running Hyper-V. You need to implement nested virtualization to run Hyper-V inside a virtual machine for testing purposes. What should you configure on the virtual machine?
A) Enable MAC address spoofing
B) Configure dynamic memory
C) Run Set-VMProcessor with -ExposeVirtualizationExtensions $true
D) Enable SR-IOV on the virtual network adapter
Answer: C
Explanation:
The correct answer is option C. To enable nested virtualization in Hyper-V, you must use the PowerShell cmdlet Set-VMProcessor with the -ExposeVirtualizationExtensions parameter set to $true on the parent host. This command exposes the processor’s virtualization extensions (Intel VT-x or AMD-V) to the virtual machine, allowing it to run Hyper-V and create its own nested virtual machines.
The command syntax is: Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true. This must be executed on the physical Hyper-V host while the target virtual machine is powered off. Additionally, the virtual machine must use configuration version 8.0 or later, and dynamic memory should be disabled. Nested virtualization is particularly useful for testing and development scenarios where you need to experiment with hypervisor configurations without requiring additional physical hardware.
Option A is incorrect because MAC address spoofing is a network configuration that allows a virtual machine to change the source MAC address in outgoing packets. While MAC address spoofing might be needed in some nested virtualization scenarios for networking purposes (such as allowing nested VMs to communicate externally), it’s not the primary requirement for enabling nested virtualization itself. Enabling virtualization extensions is the fundamental requirement.
Option B is incorrect because dynamic memory actually must be disabled for nested virtualization to work properly. Dynamic memory allows Hyper-V to automatically adjust the amount of RAM assigned to virtual machines based on demand, but this feature is incompatible with nested virtualization. Virtual machines hosting nested Hyper-V must have a fixed amount of static memory assigned to ensure stable operation.
Option D is incorrect because SR-IOV (Single Root I/O Virtualization) is a network adapter feature that allows virtual machines to bypass the virtual switch and communicate directly with physical network adapters for improved performance. SR-IOV is unrelated to nested virtualization capabilities and doesn’t enable the processor virtualization extensions required for running Hyper-V inside a virtual machine.
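A typical end-to-end sequence on the physical Hyper-V host looks like the following; the VM name "TestVM" and memory size are illustrative:

```powershell
# The VM must be powered off before changing these settings.
Stop-VM -Name "TestVM"

# Nested virtualization requires static memory.
Set-VMMemory -VMName "TestVM" -DynamicMemoryEnabled $false -StartupBytes 4GB

# Expose Intel VT-x / AMD-V to the guest (answer C).
Set-VMProcessor -VMName "TestVM" -ExposeVirtualizationExtensions $true

# Optional: allows nested VMs to reach the external network through
# the outer VM's virtual adapter.
Set-VMNetworkAdapter -VMName "TestVM" -MacAddressSpoofing On

Start-VM -Name "TestVM"
```

After the VM starts, the Hyper-V role can be installed inside it as on a physical host.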
Question 86
You manage a Windows Server 2022 domain controller. You need to configure Active Directory to replicate password changes immediately to all domain controllers in the domain, bypassing the normal replication schedule. Which Active Directory site link option should you configure?
A) Change Notify
B) Urgent Replication
C) Priority Replication
D) Immediate Sync
Answer: A
Explanation:
The correct answer is option A. Change Notify (also known as notification-based replication) is the Active Directory site link option that triggers immediate replication of certain critical changes, including password changes, account lockouts, and security-related modifications. When Change Notify is enabled on a site link, domain controllers immediately notify their replication partners about these urgent changes rather than waiting for the normal replication schedule.
By default, change notification is used for replication between domain controllers within the same site but is disabled on inter-site site links. Password changes are classified as urgent replication traffic, meaning they’re automatically replicated immediately even on schedule-based site links when certain conditions are met. However, explicitly enabling Change Notify on a site link ensures that domain controllers proactively push these critical changes to replication partners across sites, reducing the time window for password synchronization issues and improving security.
Option B is incorrect because while “urgent replication” is a concept in Active Directory (certain changes like password modifications are treated as urgent), it’s not a configurable site link option. Urgent replication is automatically triggered for specific types of changes, but the mechanism that enables this behavior for site links is the Change Notify option, not a separate “Urgent Replication” setting.
Option C is incorrect because “Priority Replication” is not a standard Active Directory site link configuration option. While Active Directory does prioritize certain types of replication traffic (like urgent changes), there isn’t a specific site link setting called Priority Replication. The Change Notify option is the actual mechanism that enables prioritized, immediate replication of critical changes.
Option D is incorrect because “Immediate Sync” is not a valid Active Directory site link option. The terminology used in Active Directory for triggering immediate replication is Change Notify. While you can manually force replication between domain controllers using the repadmin tool, there’s no site link configuration option called Immediate Sync.
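There is no dedicated cmdlet for Change Notify; it is enabled by setting bit 0x1 of the site link object’s options attribute. A sketch using the Active Directory module, with "DEFAULTIPSITELINK" as an illustrative link name:

```powershell
# Read the site link, including its options attribute.
$link = Get-ADReplicationSiteLink -Filter 'Name -eq "DEFAULTIPSITELINK"' `
    -Properties options

# Bit 0x1 = USE_NOTIFY (change notification). An unset options
# attribute is treated as 0 by the -bor operation.
$newOptions = $link.options -bor 1

Set-ADObject -Identity $link.DistinguishedName -Replace @{ options = $newOptions }
```

Using -bor preserves any other option bits (such as those controlling compression) already set on the link.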
Question 87
You have a Windows Server 2022 server configured as a DHCP server. You need to implement DHCP failover to provide high availability. The solution must ensure that both DHCP servers can actively assign IP addresses simultaneously. Which DHCP failover mode should you configure?
A) Hot standby mode
B) Load balance mode
C) Active-passive mode
D) Cluster mode
Answer: B
Explanation:
The correct answer is option B. Load balance mode is the DHCP failover configuration that allows both DHCP servers to actively assign IP addresses to clients simultaneously. In this mode, both servers share the same IP address scope, and client requests are distributed between the servers based on a configurable percentage (default is 50/50 split).
Load balance mode provides both high availability and load distribution. When a client broadcasts a DHCP discover request, both servers receive it, but they use an algorithm to determine which server should respond based on the configured load distribution percentage. If one server becomes unavailable, the other server automatically takes over the entire workload, ensuring continuous DHCP service. This mode is ideal when you want to maximize resource utilization and distribute the DHCP workload across multiple servers.
Option A is incorrect because hot standby mode is a different DHCP failover configuration where one server acts as the primary (active) server handling all DHCP requests, while the secondary server remains in standby mode and only becomes active if the primary server fails. In hot standby mode, both servers cannot actively assign IP addresses simultaneously—only the primary server serves clients under normal operation.
Option C is incorrect because “active-passive mode” is essentially another term for hot standby mode in DHCP failover configurations, not a separate option. In active-passive configurations, one server actively serves clients while the other remains passive until a failover occurs. This doesn’t meet the requirement of having both servers actively assign IP addresses simultaneously.
Option D is incorrect because “cluster mode” is not a valid DHCP failover mode in Windows Server. While you could theoretically implement DHCP in a failover cluster, the native DHCP failover feature in Windows Server uses either load balance mode or hot standby mode. The question specifically asks about DHCP failover modes, and the correct answer for simultaneous active assignment is load balance mode.
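A load balance failover relationship for an existing scope can be created in one command; the server names, scope ID, and shared secret below are illustrative:

```powershell
# Create a 50/50 load balance failover relationship between DHCP1 and
# DHCP2 for scope 10.0.0.0 (answer B).
Add-DhcpServerv4Failover -ComputerName "DHCP1" -Name "DHCP1-DHCP2-LB" `
    -PartnerServer "DHCP2.contoso.com" -ScopeId 10.0.0.0 `
    -LoadBalancePercent 50 -SharedSecret "S3cret!" `
    -MaxClientLeadTime 01:00:00 -AutoStateTransition $true
```

Omitting -LoadBalancePercent defaults to a 50/50 split; specifying -ReservePercent and -ServerRole instead would create a hot standby relationship.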
Question 88
You are implementing Windows Server Update Services (WSUS) in your organization. You need to configure WSUS to automatically approve critical updates for installation. Which WSUS feature should you configure?
A) Automatic Approvals rule
B) Synchronization schedule
C) Update classifications
D) Computer groups
Answer: A
Explanation:
The correct answer is option A. Automatic Approvals is a feature in WSUS that allows administrators to create rules for automatically approving specific types of updates for designated computer groups. By configuring an Automatic Approvals rule, you can specify that critical updates (or other update classifications) should be automatically approved for installation without requiring manual intervention for each update.
To configure this, you create an Automatic Approvals rule in the WSUS console that specifies the update classification (in this case, Critical Updates), the target computer groups, and the approval action (Install). Once configured, any critical updates that are synchronized to the WSUS server will automatically be approved according to the rule parameters. This automation significantly reduces administrative overhead while ensuring that important security updates are deployed promptly.
Option B is incorrect because the synchronization schedule determines when the WSUS server contacts Microsoft Update to download new update metadata and files. While synchronization is necessary to obtain updates, it doesn’t automatically approve updates for installation. Synchronization only downloads update information to your WSUS server; you still need to configure approval rules to make those updates available to client computers.
Option C is incorrect because update classifications are categories used to organize different types of updates (such as Critical Updates, Security Updates, Definition Updates, etc.). While you can configure which classifications WSUS should synchronize from Microsoft Update, selecting classifications doesn’t automatically approve updates for installation. Classifications are used as criteria within Automatic Approvals rules, but they don’t provide approval functionality by themselves.
Option D is incorrect because computer groups are organizational containers in WSUS used to target updates to specific sets of computers. While computer groups are essential for managing which computers receive which updates, creating computer groups alone doesn’t automatically approve updates. Computer groups are typically used in conjunction with Automatic Approvals rules to specify which groups should receive automatically approved updates.
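Approval rules are normally created in the WSUS console; there is no dedicated cmdlet, but the same result can be scripted against the WSUS administration API. A rough sketch, with the rule and group names illustrative:

```powershell
$wsus = Get-WsusServer
$rule = $wsus.CreateInstallApprovalRule("Auto-approve Critical Updates")

# Scope the rule to the Critical Updates classification.
$classColl = New-Object Microsoft.UpdateServices.Administration.UpdateClassificationCollection
$wsus.GetUpdateClassifications() |
    Where-Object { $_.Title -eq "Critical Updates" } |
    ForEach-Object { [void]$classColl.Add($_) }
$rule.SetUpdateClassifications($classColl)

# Target the rule at a computer group.
$groupColl = New-Object Microsoft.UpdateServices.Administration.ComputerTargetGroupCollection
$wsus.GetComputerTargetGroups() |
    Where-Object { $_.Name -eq "All Computers" } |
    ForEach-Object { [void]$groupColl.Add($_) }
$rule.SetComputerTargetGroups($groupColl)

$rule.Enabled = $true
$rule.Save()
```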
Question 89
You have a Windows Server 2022 file server that hosts user home directories. You need to implement a solution that allows users to recover accidentally deleted files without administrator intervention. What should you implement?
A) Windows Server Backup
B) Shadow Copies for Shared Folders
C) File History
D) Azure Backup
Answer: B
Explanation:
The correct answer is option B. Shadow Copies for Shared Folders (also known as Previous Versions) is a Windows Server feature that creates point-in-time snapshots of files and folders on shared network locations. When enabled, users can right-click a file or folder from a network share, select “Restore previous versions,” and recover earlier versions of files or restore deleted files without requiring administrator assistance.
Shadow Copies uses the Volume Shadow Copy Service (VSS) to create snapshots according to a configured schedule (typically twice daily by default). These snapshots are stored on the same volume and consume minimal space initially, using copy-on-write technology. Users access previous versions directly through Windows Explorer by right-clicking a file or folder and selecting the “Previous Versions” tab. This self-service capability empowers users to recover their own data quickly, reducing help desk calls and minimizing data loss.
Option A is incorrect because Windows Server Backup is a full server or volume backup solution designed for disaster recovery and system restoration. While Windows Server Backup can protect user data, recovering files typically requires administrator intervention to restore from backup media. It’s not designed for self-service file recovery by end users and doesn’t provide the quick, user-accessible previous versions functionality needed for this scenario.
Option C is incorrect because File History is a Windows client feature (available in Windows 10 and Windows 11) that automatically backs up files from a user’s local folders to an external drive or network location. File History is not a server-side feature and doesn’t provide snapshot-based recovery for network file shares. It’s designed for protecting data on individual client computers rather than shared server storage.
Option D is incorrect because while Azure Backup can be configured to back up Windows Server file shares (including on-premises servers via the MARS agent or Azure File Sync), recovering files from Azure Backup typically requires administrator involvement or access to the Azure portal. Azure Backup is an excellent cloud-based backup solution but doesn’t provide the immediate self-service recovery experience that Shadow Copies offers for network shares.
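Shadow copies are usually enabled from the volume’s Shadow Copies tab, but the same pieces can be set up from an elevated prompt. A sketch for volume D:, with the storage size illustrative:

```powershell
# Allocate shadow copy storage on the same volume (up to 10% of D:).
vssadmin add shadowstorage /for=D: /on=D: /maxsize=10%

# Take an immediate snapshot.
vssadmin create shadow /for=D:

# Snapshots can also be triggered via WMI, which is what the default
# twice-daily scheduled task does under the hood.
Invoke-CimMethod -ClassName Win32_ShadowCopy -MethodName Create `
    -Arguments @{ Volume = "D:\" }
```

A recurring schedule still needs a scheduled task (or the GUI, which creates one automatically) to call this on the desired cadence.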
Question 90
You manage a Windows Server 2022 environment with multiple file servers. You need to implement a solution that maintains multiple synchronized copies of files across servers and allows automatic failover if one server becomes unavailable. Which feature should you implement?
A) Distributed File System Replication (DFSR)
B) Storage Replica
C) File Server Resource Manager
D) Work Folders
Answer: A
Explanation:
The correct answer is option A. Distributed File System Replication (DFSR) is the Windows Server feature specifically designed to replicate folders between servers and provide automatic failover for file shares. DFSR uses a multi-master replication engine that efficiently synchronizes files and folders across multiple servers, maintaining consistency while minimizing bandwidth usage through compression and differential replication.
When combined with DFS Namespaces, DFSR provides transparent failover capabilities. Users access files through a DFS namespace path, and if one server becomes unavailable, clients are automatically redirected to another server hosting a replica of the same data. DFSR handles conflicts intelligently and provides scheduling options to control when replication occurs. This makes DFSR ideal for scenarios requiring high availability of file shares with automatic failover between geographically distributed locations or within the same datacenter.
Option B is incorrect because Storage Replica is a block-level replication technology designed primarily for disaster recovery scenarios and storage failover in server clusters. While Storage Replica provides synchronous or asynchronous replication between volumes, it’s typically used for entire volume replication rather than selective folder synchronization. Storage Replica doesn’t provide the same automatic client failover capabilities for file shares that DFSR offers when combined with DFS Namespaces.
Option C is incorrect because File Server Resource Manager (FSRM) is a management tool for classifying, managing, and monitoring file server storage. FSRM provides features like quotas, file screening, storage reports, and file classification, but it doesn’t provide replication or failover capabilities. FSRM helps you manage what’s stored on file servers but doesn’t synchronize or replicate data between servers.
Option D is incorrect because Work Folders is a feature that allows users to sync their work files across multiple devices (computers, tablets, phones) while maintaining corporate control over the data. Work Folders is designed for end-user file synchronization scenarios, similar to consumer services like OneDrive, but it’s not intended for server-to-server replication or automatic failover between file servers.
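A minimal two-server sketch combining DFSR with a domain-based namespace for client failover follows; server names, paths, and the namespace name are illustrative:

```powershell
Install-WindowsFeature FS-DFS-Replication, FS-DFS-Namespace -IncludeManagementTools

# Replication group with one replicated folder across FS1 and FS2.
New-DfsReplicationGroup -GroupName "RG-Data"
New-DfsReplicatedFolder -GroupName "RG-Data" -FolderName "Data"
Add-DfsrMember -GroupName "RG-Data" -ComputerName "FS1","FS2"
Add-DfsrConnection -GroupName "RG-Data" `
    -SourceComputerName "FS1" -DestinationComputerName "FS2"

# FS1 seeds the initial content as the primary member.
Set-DfsrMembership -GroupName "RG-Data" -FolderName "Data" `
    -ComputerName "FS1" -ContentPath "D:\Data" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "RG-Data" -FolderName "Data" `
    -ComputerName "FS2" -ContentPath "D:\Data" -Force

# Domain-based namespace clients use; both servers host root targets,
# so clients fail over transparently if one server goes down.
New-DfsnRoot -TargetPath "\\FS1\Data" -Type DomainV2 -Path "\\contoso.com\Data"
New-DfsnRootTarget -Path "\\contoso.com\Data" -TargetPath "\\FS2\Data"
```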
Question 91
You have a Windows Server 2022 server running the Network Policy Server (NPS) role. You need to configure NPS to authenticate wireless clients using certificates. Which authentication method should you configure in the network policy?
A) PEAP-MS-CHAPv2
B) EAP-TLS
C) PAP
D) MS-CHAPv2
Answer: B
Explanation:
The correct answer is option B. EAP-TLS (Extensible Authentication Protocol-Transport Layer Security) is the authentication method that uses certificates for both client and server authentication. When configuring NPS for certificate-based wireless authentication, EAP-TLS provides the strongest security by requiring both the RADIUS server (NPS) and the wireless clients to present valid digital certificates during the authentication process.
With EAP-TLS, the NPS server presents its server certificate to prove its identity to clients, and clients present their user or computer certificates to prove their identities to the server. This mutual authentication using certificates eliminates the need for password-based authentication and provides the highest level of security for wireless networks. EAP-TLS is widely considered the gold standard for 802.1X wireless authentication and is commonly used in enterprise environments with a Public Key Infrastructure (PKI) in place.
Option A is incorrect because PEAP-MS-CHAPv2 (Protected Extensible Authentication Protocol with Microsoft Challenge Handshake Authentication Protocol version 2) is a password-based authentication method, not certificate-based for client authentication. While PEAP does use a certificate on the server side to create an encrypted TLS tunnel, clients authenticate using username and password credentials rather than certificates. PEAP-MS-CHAPv2 is commonly used when certificate deployment to all clients is impractical.
Option C is incorrect because PAP (Password Authentication Protocol) is an outdated, insecure authentication protocol that transmits passwords in clear text. PAP provides no encryption and is not suitable for modern wireless network authentication. It doesn’t support certificate-based authentication and should not be used for securing wireless networks due to its inherent security weaknesses.
Option D is incorrect because MS-CHAPv2 (Microsoft Challenge Handshake Authentication Protocol version 2) is a password-based authentication protocol that doesn’t use certificates for client authentication. While MS-CHAPv2 is more secure than PAP because it doesn’t transmit passwords in clear text, it’s still based on username and password credentials rather than digital certificates. When used alone without PEAP, MS-CHAPv2 has known security vulnerabilities.
Question 92
You are configuring a Windows Server 2022 server as a DNS server. You need to ensure that the DNS server forwards queries for external domains to your ISP’s DNS servers, but resolves queries for your internal domain locally. What type of zone should you create for your internal domain?
A) Secondary zone
B) Stub zone
C) Primary zone
D) Conditional forwarder
Answer: C
Explanation:
The correct answer is option C. A primary zone is the authoritative, read-write copy of a DNS zone where you create and manage DNS records for your internal domain. When you create a primary zone on your DNS server for your internal domain (such as contoso.local or contoso.com), that server becomes authoritative for that domain and can resolve queries for resources within that domain locally.
Creating a primary zone for your internal domain allows you to maintain full control over DNS records, add new hosts, create service records, and manage all aspects of name resolution for your organization. When combined with DNS forwarders configured to point to your ISP’s DNS servers, your DNS server will resolve internal queries using the primary zone and forward external queries to the ISP’s servers. If you’re using Active Directory, you can create an Active Directory-integrated primary zone for additional benefits like secure dynamic updates and multi-master replication.
Option A is incorrect because a secondary zone is a read-only copy of a primary zone that receives zone data through zone transfers from a primary DNS server. Secondary zones are used for redundancy and load distribution but cannot be directly edited. If you want your server to locally resolve and manage records for your internal domain, you need a primary zone where you can create and modify records, not a secondary zone.
Option B is incorrect because a stub zone contains only the essential DNS records needed to identify the authoritative DNS servers for a zone (NS records and necessary A records). Stub zones don’t contain the full zone database and can’t resolve queries for hosts within the domain—they only help the DNS server locate the authoritative servers for that domain. A stub zone wouldn’t allow your server to resolve internal queries locally.
Option D is incorrect because a conditional forwarder is a configuration that forwards queries for a specific domain to designated DNS servers. While conditional forwarders are useful for directing queries for specific external domains to particular DNS servers (such as partner organizations’ DNS servers), they don’t create a local zone database. You need a primary zone to store and resolve records for your internal domain locally.
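The combination described — a local primary zone plus server-level forwarders — can be sketched in two commands; the zone name and forwarder addresses are illustrative:

```powershell
# Authoritative, read-write zone for the internal domain (answer C).
Add-DnsServerPrimaryZone -Name "contoso.com" -ZoneFile "contoso.com.dns"

# On a domain controller, an AD-integrated zone would be used instead:
# Add-DnsServerPrimaryZone -Name "contoso.com" -ReplicationScope "Domain"

# Everything the server is not authoritative for goes to the ISP.
Set-DnsServerForwarder -IPAddress 203.0.113.10, 203.0.113.11
```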
Question 93
You have a Windows Server 2022 file server with several shared folders. You need to ensure that when users exceed their assigned storage quota, they receive a warning email, but are still able to save files. Which type of quota should you configure using File Server Resource Manager?
A) Hard quota
B) Soft quota
C) Passive quota
D) Active quota
Answer: B
Explanation:
The correct answer is option B. A soft quota in File Server Resource Manager (FSRM) is a monitoring threshold that triggers notifications (such as email warnings, event log entries, or reports) when users exceed their assigned storage limit, but does not prevent users from continuing to save files beyond the quota limit. Soft quotas are ideal for monitoring storage usage and warning users about excessive consumption while maintaining productivity.
When you configure a soft quota, you can set up various notification actions that occur at specified threshold percentages (for example, at 85%, 95%, and 100% of the quota limit). These notifications can include sending emails to users and administrators, logging events, running commands or scripts, or generating storage reports. The key characteristic of soft quotas is that they’re informational and educational rather than restrictive—users receive warnings but retain the ability to save additional files.
Option A is incorrect because a hard quota strictly enforces the storage limit and prevents users from saving any additional files once they reach the quota threshold. When users attempt to save files after reaching a hard quota, they receive an “insufficient disk space” error, and the save operation fails. Hard quotas are used when you need to strictly control storage consumption, but they don’t meet the requirement of allowing users to continue saving files after exceeding the limit.
Option C is incorrect because “passive quota” is not a standard term or quota type in File Server Resource Manager. FSRM uses the terminology “soft quota” and “hard quota” to distinguish between monitoring quotas (soft) and enforcing quotas (hard). While a soft quota could be considered “passive” in the sense that it doesn’t actively block file operations, the correct FSRM terminology is “soft quota.”
Option D is incorrect because “active quota” is not a recognized quota type in File Server Resource Manager. The term might suggest an enforcing quota, which would be a hard quota, but this is not the correct terminology used in FSRM. The two quota types available in FSRM are soft quotas (monitoring with warnings) and hard quotas (strict enforcement).
Question 94
You manage an Active Directory domain with Windows Server 2022 domain controllers. You need to implement a solution that prevents users from reusing their last 12 passwords when changing passwords. Where should you configure this setting?
A) Account Policies in Default Domain Policy
B) Local Security Policy on each domain controller
C) Fine-Grained Password Policy
D) User Properties in Active Directory Users and Computers
Answer: A
Explanation:
The correct answer is option A. The password history setting, which determines how many previous passwords are remembered and prevents reuse, is configured in the Account Policies section of the Default Domain Policy (or another Group Policy Object linked at the domain level). Specifically, you’ll find this setting under Computer Configuration > Policies > Windows Settings > Security Settings > Account Policies > Password Policy.
The “Enforce password history” setting allows you to specify how many unique new passwords must be used before an old password can be reused. Setting this value to 12 means the system remembers the user’s last 12 passwords and prevents them from reusing any of those passwords when changing their password. This policy, along with other password policy settings, must be configured at the domain level and applies to all users in the domain by default.
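The same value can also be inspected and set from PowerShell with the ActiveDirectory module; this is a sketch, and `contoso.com` is a placeholder for your domain (in practice, the GPO setting described above remains the authoritative source for the domain password policy):

```powershell
# Set the domain-wide password history to remember the last 12 passwords:
Set-ADDefaultDomainPasswordPolicy -Identity contoso.com -PasswordHistoryCount 12

# Verify the effective domain policy:
Get-ADDefaultDomainPasswordPolicy | Select-Object PasswordHistoryCount
```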
Option B is incorrect because configuring Local Security Policy on individual domain controllers would only affect local accounts on those specific servers, not domain user accounts. Domain-wide password policies must be configured through Group Policy at the domain level. Local Security Policy is only relevant for standalone servers or local account management, not for Active Directory domain users.
Option C is incorrect because, although it could technically work, it is not the standard approach. Fine-Grained Password Policies (also called Password Settings Objects) are used when you need to apply different password policies to different groups of users within the same domain. If you want to apply a uniform password history requirement to all domain users, the standard and most appropriate method is configuring it in the Default Domain Policy, not creating a Fine-Grained Password Policy.
Option D is incorrect because user properties in Active Directory Users and Computers don’t provide an option to configure password history enforcement. Individual user account properties allow you to reset passwords, configure account expiration, and set various account options, but password policy settings like password history are domain-wide policies configured through Group Policy, not on individual user objects.
Question 95
You have a Windows Server 2022 server running Internet Information Services (IIS). You need to configure the web server to automatically redirect all HTTP traffic to HTTPS. What should you configure in IIS?
A) URL Rewrite rule with a redirect action
B) HTTP Response Headers
C) Application Request Routing
D) SSL Settings to require SSL
Answer: A
Explanation:
The correct answer is option A. A URL Rewrite rule with a redirect action is the most effective and flexible method to automatically redirect all HTTP traffic (port 80) to HTTPS (port 443) in IIS. The URL Rewrite module allows you to create rules that match incoming HTTP requests and redirect them to the HTTPS equivalent while preserving the requested path and query string.
To implement this, you install the URL Rewrite module (if not already installed), then create a rule that matches requests coming to HTTP (checking if HTTPS is “off”) and redirects them to the HTTPS version of the same URL using a 301 (permanent redirect) or 302 (temporary redirect) status code. The rule typically uses the pattern “(.*)” and redirects to “https://{HTTP_HOST}/{R:1}”. This approach ensures that all HTTP traffic is automatically redirected to HTTPS, providing encryption for all connections.
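A typical web.config fragment for such a rule might look like the following sketch (it assumes the URL Rewrite module is installed; the rule name is arbitrary):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="HTTP to HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <!-- Only act on requests that arrived over plain HTTP -->
          <add input="{HTTPS}" pattern="off" />
        </conditions>
        <!-- 301 permanent redirect to the HTTPS equivalent, path preserved -->
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```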
Option B is incorrect because HTTP Response Headers are used to add custom headers to HTTP responses sent from the server, such as security headers like Strict-Transport-Security, Content-Security-Policy, or X-Frame-Options. While you might add security-related headers as part of your HTTPS implementation, configuring response headers doesn’t redirect HTTP traffic to HTTPS. Response headers are metadata added to responses, not redirection mechanisms.
Option C is incorrect because Application Request Routing (ARR) is an IIS extension used for load balancing, routing, and caching across multiple servers. ARR is designed for reverse proxy scenarios and server farm management, not for redirecting HTTP to HTTPS on a single server. While ARR can be used in complex load-balancing scenarios that involve SSL termination, it’s not the appropriate tool for simple HTTP to HTTPS redirection.
Option D is incorrect because configuring SSL Settings to “Require SSL” in IIS prevents HTTP access to a website by returning a 403 Forbidden error when users attempt to connect via HTTP. While this enforces HTTPS usage, it doesn’t automatically redirect users—instead, it simply blocks HTTP connections with an error. Users who type “http://” in their browser would see an error rather than being seamlessly redirected to the HTTPS version.
Question 96
You are configuring Windows Server 2022 servers in a workgroup environment. You need to implement centralized management of these servers using Windows Admin Center. The solution must not require joining the servers to a domain. What should you configure on the Windows Admin Center gateway server?
A) Kerberos Constrained Delegation
B) CredSSP authentication
C) Certificate-based authentication
D) Azure AD integration
Answer: B
Explanation:
The correct answer is option B. CredSSP (Credential Security Support Provider) authentication is the appropriate method for managing workgroup servers through Windows Admin Center when those servers are not joined to an Active Directory domain. CredSSP allows the gateway to delegate credentials to the target managed servers, which is necessary in workgroup scenarios where Kerberos authentication (used in domain environments) is not available.
When you connect to a workgroup server through Windows Admin Center, you must explicitly provide credentials for that server, and CredSSP handles the credential delegation required for remote management operations. While CredSSP is less secure than Kerberos Constrained Delegation (because it’s more vulnerable to credential theft if the gateway is compromised), it’s the practical solution for workgroup environments. To use CredSSP, you must enable it on both the Windows Admin Center gateway and the target workgroup servers, typically through Group Policy or PowerShell configuration.
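The PowerShell side of that configuration might look roughly like this sketch, where `Server1` is an illustrative workgroup server name (in a workgroup you may also need to add the target to the gateway's WinRM TrustedHosts list):

```powershell
# On the Windows Admin Center gateway (the CredSSP client side):
# allow the gateway to delegate credentials to the named target server.
Enable-WSManCredSSP -Role Client -DelegateComputer "Server1" -Force

# On each managed workgroup server (the CredSSP server side):
# accept delegated credentials from CredSSP clients.
Enable-WSManCredSSP -Role Server -Force
```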
Option A is incorrect because Kerberos Constrained Delegation is an authentication mechanism that requires Active Directory domain membership. Kerberos is the preferred authentication method for domain-joined servers and provides enhanced security by limiting which services can delegate user credentials. However, Kerberos authentication cannot be used in workgroup environments because it depends on Active Directory infrastructure for ticket granting and service principal names. Since the scenario explicitly states the servers are in a workgroup, Kerberos Constrained Delegation is not applicable.
Option C is incorrect because while certificate-based authentication can be used to secure the connection between the browser and Windows Admin Center gateway (via HTTPS), it doesn’t solve the credential delegation problem needed for managing workgroup servers. Certificates secure the communication channel but don’t provide the mechanism for passing user credentials from the gateway to managed servers. You still need CredSSP for credential delegation in workgroup scenarios, even when using certificates for gateway authentication.
Option D is incorrect because Azure AD integration allows users to authenticate to the Windows Admin Center gateway using their Azure Active Directory credentials, but it doesn’t address the challenge of managing workgroup servers that aren’t joined to any directory service. Azure AD integration is useful for gateway access control and can be combined with Azure AD joined servers, but for traditional workgroup servers without any directory service membership, CredSSP remains necessary for credential delegation and remote management.
Question 97
You have a Windows Server 2022 Hyper-V host with multiple virtual machines. You need to configure a virtual switch that allows virtual machines to communicate with each other but prevents them from accessing the physical network. Which type of virtual switch should you create?
A) External virtual switch
B) Internal virtual switch
C) Private virtual switch
D) NAT virtual switch
Answer: C
Explanation:
The correct answer is option C. A private virtual switch in Hyper-V allows communication only between virtual machines connected to that switch on the same Hyper-V host. Virtual machines on a private virtual switch cannot communicate with the host operating system or with external networks. This isolation makes private virtual switches ideal for testing scenarios, security research environments, or any situation where you need virtual machines to communicate with each other while being completely isolated from production networks.
When you create a private virtual switch, Hyper-V creates a software-based network segment that exists only within the Hyper-V environment on that specific host. Virtual machines connected to the private switch can exchange network traffic with each other as if they were connected to a physical switch, but all traffic remains contained within the virtualization layer. This provides maximum isolation while still allowing inter-VM communication for applications that require network connectivity between virtual machines.
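Creating and using such a switch takes two Hyper-V cmdlets; the switch and VM names below are illustrative placeholders:

```powershell
# Create an isolated, VM-to-VM-only network segment on this host:
New-VMSwitch -Name "IsolatedLab" -SwitchType Private

# Attach existing virtual machines to the isolated segment:
Connect-VMNetworkAdapter -VMName "VM1","VM2" -SwitchName "IsolatedLab"
```

Changing `-SwitchType` to `Internal` would instead create the option B topology, where the host OS can also reach the VMs.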
Option A is incorrect because an external virtual switch connects to a physical network adapter on the host and provides virtual machines with access to the physical network. Virtual machines connected to an external switch can communicate with other computers on the physical network, access the internet, and interact with external resources. This doesn’t meet the requirement of preventing access to the physical network—in fact, an external switch specifically provides that access.
Option B is incorrect because an internal virtual switch allows communication between virtual machines and the host operating system, but not with the external physical network. While an internal switch does prevent access to external networks, it still allows the host OS to communicate with the virtual machines. Because the requirement is for VMs to communicate only with each other, and nothing in the scenario calls for host communication, the private switch is the more appropriate choice for complete isolation.
Option D is incorrect because a NAT (Network Address Translation) virtual switch allows virtual machines to share the host’s IP address for accessing external networks through NAT. Virtual machines on a NAT switch can access external resources but appear to come from the host’s IP address. NAT switches provide network connectivity to external resources, which directly contradicts the requirement to prevent access to the physical network. NAT switches are typically used when you want to provide internet access without dedicating physical NICs to VMs.
Question 98
You manage a Windows Server 2022 environment with multiple file servers. You need to implement a solution that classifies files based on content and automatically applies encryption to files containing credit card numbers. Which File Server Resource Manager component should you use?
A) File Management Tasks
B) File Classification Infrastructure
C) File Screening
D) Storage Reports
Answer: B
Explanation:
The correct answer is option B. File Classification Infrastructure (FCI) is the File Server Resource Manager component that automatically classifies files based on their content, location, or other properties, and then applies policies or actions based on those classifications. FCI can scan file contents using built-in or custom classification rules to identify sensitive information like credit card numbers, social security numbers, or personally identifiable information.
Once files are classified, FCI can trigger file management tasks that automatically apply encryption, move files to secure locations, set access permissions, or perform other actions. For the scenario of detecting credit card numbers and applying encryption, you would create a content-based classification rule that uses pattern matching (regular expressions) to identify credit card number formats, classify those files appropriately, and then configure a file management task to encrypt files with that classification. This automated approach ensures consistent protection of sensitive data without manual intervention.
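A simplified sketch of the classification half of that setup, using the FileServerResourceManager cmdlets (the property name, scanned path, and regular expression are illustrative assumptions; real card-number detection typically also validates with checks such as the Luhn algorithm):

```powershell
# 1. Define a Yes/No classification property used to flag sensitive files:
New-FsrmClassificationPropertyDefinition -Name "ContainsPII" -Type YesNo

# 2. Create a content rule that sets the property to Yes when file content
#    matches a card-number-like pattern (13-16 digits, optional separators):
New-FsrmClassificationRule -Name "Find card numbers" `
    -Property "ContainsPII" -PropertyValue "Yes" `
    -Namespace @("D:\Shares\Finance") `
    -ClassificationMechanism "Content Classifier" `
    -ContentRegularExpression @('\b(?:\d[ -]?){13,16}\b')
```

A file management task would then be scoped to files where `ContainsPII` is `Yes` to apply the encryption action.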
Option A is incorrect because while File Management Tasks are used to perform actions on files (such as expiring old files, moving files, or running scripts), they work in conjunction with File Classification Infrastructure rather than replacing it. File Management Tasks can use classification properties to determine which files to process, but they don’t perform the content analysis and classification themselves. You need FCI to classify files based on content before File Management Tasks can act on those classifications.
Option C is incorrect because File Screening is used to control which types of files users can save to specific locations based on file extensions or file groups. File Screening prevents users from saving unauthorized file types (like executable files or media files) but doesn’t analyze file contents or classify files based on what’s inside them. File Screening is about blocking certain file types, not identifying sensitive content within allowed files.
Option D is incorrect because Storage Reports generate informational reports about file server usage, including reports on file types, large files, duplicate files, quota usage, and other storage metrics. Storage Reports help administrators understand storage consumption patterns and identify issues, but they don’t classify files or apply policies based on content. Storage Reports are analytical tools, not classification or policy enforcement mechanisms.
Question 99
You have a Windows Server 2022 server running the Remote Desktop Session Host role. You need to configure the server to automatically disconnect users who have been idle for 30 minutes but keep their sessions active so they can reconnect without losing their work. Which setting should you configure?
A) Set session time limit for active but idle Remote Desktop Services sessions
B) End session when time limits are reached
C) Set time limit for disconnected sessions
D) Set time limit for active Remote Desktop Services sessions
Answer: A
Explanation:
The correct answer is option A. The “Set session time limit for active but idle Remote Desktop Services sessions” Group Policy setting controls how long an active RDS session can remain idle before the system automatically takes action. When you configure this setting to 30 minutes, users who are connected but inactive (no keyboard or mouse input) will have their sessions automatically disconnected after 30 minutes of idle time.
Importantly, disconnecting a session is different from logging off or ending a session. When a session is disconnected due to idle timeout, all applications continue running, and the user’s work is preserved. Users can reconnect to the same session later and resume exactly where they left off. This configuration is found in Group Policy under Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Session Time Limits. This approach balances resource management with user productivity by freeing up active connections while preserving user work.
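Under the hood, this Group Policy setting writes the `MaxIdleTime` value (in milliseconds) to the Terminal Services policy key; the following is a sketch of setting it directly, assuming Group Policy is not managing the value on this host:

```powershell
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
New-Item -Path $key -Force | Out-Null

# 30 minutes = 1,800,000 ms. Idle sessions are disconnected (not logged off)
# at this threshold, so applications keep running and work is preserved.
Set-ItemProperty -Path $key -Name "MaxIdleTime" -Value 1800000 -Type DWord
```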
Option B is incorrect because “End session when time limits are reached” is a separate policy that determines what happens when session time limits expire. When this setting is enabled, sessions are logged off (terminated) rather than simply disconnected when time limits are reached. If you enable this setting, users would lose their work when the 30-minute idle timeout occurs because their sessions would be ended and all applications closed, which doesn’t meet the requirement of keeping sessions active for reconnection.
Option C is incorrect because “Set time limit for disconnected sessions” controls how long a session can remain in a disconnected state before being logged off. This setting applies after a session has already been disconnected (either manually by the user or automatically due to idle timeout). While you might configure this setting as part of a comprehensive session management strategy, it doesn’t control when active idle sessions are automatically disconnected—that’s controlled by the active-but-idle setting.
Option D is incorrect because “Set time limit for active Remote Desktop Services sessions” specifies the maximum duration that any active RDS session can remain connected, regardless of whether the user is actively working or idle. This setting would disconnect or end sessions after 30 minutes of total connection time, even if users are actively working, which would be disruptive and doesn’t align with the requirement to disconnect only idle sessions while preserving work.
Question 100
You are implementing a disaster recovery solution for Windows Server 2022 domain controllers. You need to ensure that you can restore Active Directory objects that were accidentally deleted up to 180 days after deletion without performing an authoritative restore. What should you enable?
A) Active Directory Recycle Bin
B) Active Directory Snapshots
C) Active Directory Backup
D) Tombstone reanimation
Answer: A
Explanation:
The correct answer is option A. Active Directory Recycle Bin is the feature that allows you to restore deleted Active Directory objects with all their attributes intact without requiring authoritative restore procedures or taking domain controllers offline. Once enabled, the Recycle Bin preserves deleted objects for the duration of the deleted object lifetime (180 days by default, which matches your requirement) before they’re permanently removed.
When an object is deleted with the Recycle Bin enabled, it enters a “deleted” state where it’s preserved with all its attributes, group memberships, and properties intact. Administrators can restore these objects using PowerShell cmdlets like Restore-ADObject or the Active Directory Administrative Center GUI. The restoration is quick, complete, and doesn’t require directory services restore mode or authoritative restore operations. The Recycle Bin requires the forest functional level to be Windows Server 2008 R2 or higher, and once enabled, it cannot be disabled.
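Enabling the feature and restoring an object might look like this sketch, where `contoso.com` and the `jdoe` account are illustrative placeholders:

```powershell
# Enable the Recycle Bin for the forest (irreversible once enabled):
Enable-ADOptionalFeature -Identity "Recycle Bin Feature" `
    -Scope ForestOrConfigurationSet -Target "contoso.com"

# Later: find a deleted user among tombstoned/deleted objects and restore it
# with attributes and group memberships intact:
Get-ADObject -Filter 'samAccountName -eq "jdoe"' -IncludeDeletedObjects |
    Restore-ADObject
```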
Option B is incorrect because Active Directory Snapshots are point-in-time copies of the Active Directory database that can be mounted and browsed to recover deleted objects. While snapshots can be useful for recovery, they’re manually created at specific points in time and don’t provide continuous protection for deleted objects. To restore from a snapshot, you must mount the snapshot, identify the deleted object, and perform an authoritative restore, which is more complex than using the Recycle Bin. Snapshots also don’t automatically provide 180 days of retention.
Option C is incorrect because “Active Directory Backup” refers to traditional backup operations using Windows Server Backup or third-party backup solutions to create system state backups that include the Active Directory database. While backups are essential for disaster recovery, restoring deleted objects from backup requires taking a domain controller offline, booting into Directory Services Restore Mode, restoring the system state, and performing an authoritative restore to replicate the recovered objects. This process is much more complex than using the Recycle Bin.
Option D is incorrect because tombstone reanimation is a manual technique used before the Active Directory Recycle Bin was introduced. When objects are deleted without the Recycle Bin enabled, they become tombstones with most attributes stripped away. Tombstone reanimation involves using low-level tools like ldp.exe or ADSIEdit to restore these tombstones, but since most attributes are lost, significant manual work is required to reconstruct the object. Tombstone lifetime is also typically only 60-180 days, and the process is error-prone and unsupported compared to the Recycle Bin.