Visit here for our full Microsoft AZ-800 exam dumps and practice test questions.
Question 121
You have a Windows Server 2022 domain controller in your Active Directory environment. You need to configure the domain controller to use a specific DNS server for name resolution instead of using itself. What should you configure?
A) DNS client settings in network adapter properties
B) DNS forwarders in DNS Manager
C) Root hints in DNS Manager
D) Conditional forwarders in DNS Manager
Answer: A
Explanation:
The correct answer is option A. To configure a domain controller to use a specific DNS server for its own name resolution needs, you must modify the DNS client settings in the network adapter properties on the domain controller itself. While domain controllers typically run the DNS Server service and point to themselves or other domain controllers for DNS resolution, there are scenarios where you might need to configure them to use different DNS servers, such as during migration or in specific network architectures.
To configure this, you access the network adapter properties on the domain controller, open the Internet Protocol Version 4 (TCP/IPv4) properties, and specify the IP address of the preferred DNS server you want the domain controller to use. The domain controller will then send its DNS queries to the specified server rather than querying itself. However, it’s important to note that best practices recommend domain controllers point to themselves or to other domain controllers running DNS for reliable Active Directory name resolution. Pointing to non-AD-integrated DNS servers can cause replication and authentication issues, so this configuration should only be implemented in carefully planned scenarios with full understanding of the implications.
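On Server Core installations, or when scripting the change, the same DNS client setting can be applied with PowerShell. This is a minimal sketch; the interface alias and server address are placeholders for your environment:

```powershell
# Point the domain controller's DNS client at a specific DNS server
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.0.0.10

# Verify the effective DNS client configuration
Get-DnsClientServerAddress -InterfaceAlias "Ethernet" -AddressFamily IPv4
```

Note that this changes only the server's own client-side resolution; the DNS Server service running on the box is unaffected.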
Option B is incorrect because DNS forwarders are configured on the DNS Server service itself, not on the DNS client settings of the server. Forwarders determine where the DNS Server service sends queries for domains it cannot resolve authoritatively. When you configure forwarders, you’re telling the DNS server where to send external queries that it receives from clients, not configuring where the server operating system itself sends its own DNS queries. Forwarders affect how the DNS service resolves queries on behalf of clients, not how the server resolves its own name resolution needs as a client.
Option C is incorrect because root hints are a list of DNS root servers that a DNS server uses to resolve queries when it doesn’t have the answer cached and isn’t configured to use forwarders. Root hints enable DNS servers to perform iterative queries starting from the root of the DNS namespace. Like forwarders, root hints are a DNS Server service configuration that affects how the server resolves queries for its clients, not how the server operating system performs its own name resolution. Modifying root hints doesn’t change which DNS server the domain controller uses for its own client queries.
Option D is incorrect because conditional forwarders are DNS Server service configurations that direct queries for specific domains to designated DNS servers. For example, you might configure a conditional forwarder to send all queries for partner.com to that partner’s DNS servers. Conditional forwarders provide granular control over query routing for different namespaces but only affect how the DNS Server service processes queries from clients. They don’t control where the domain controller operating system sends its own DNS queries for name resolution. To change the DNS client behavior of the server itself, you must modify the network adapter DNS settings.
Question 122
You manage a Windows Server 2022 environment with multiple Hyper-V hosts. You need to implement a solution that allows virtual machines to move between hosts based on resource availability without requiring shared storage. Which feature should you configure?
A) Live Migration with SMB storage
B) Storage Migration
C) Live Migration without shared storage (Shared Nothing Live Migration)
D) Quick Migration
Answer: C
Explanation:
The correct answer is option C. Shared Nothing Live Migration, also known as Live Migration without shared storage, allows you to perform live migration of running virtual machines between Hyper-V hosts that don’t share storage infrastructure. This feature, introduced in Windows Server 2012 and enhanced in later versions, transfers both the virtual machine’s memory state and storage files simultaneously from the source host to the destination host over the network while the VM continues running.
During Shared Nothing Live Migration, the VM’s memory pages are transferred to the destination host while simultaneously copying the virtual hard disk files and VM configuration. The process uses an intelligent algorithm that minimizes downtime by tracking which memory pages change during the migration and transferring those changes incrementally. This allows VMs to move between standalone Hyper-V hosts or between hosts using different storage systems without requiring SAN, cluster shared volumes, or other shared storage infrastructure. The feature is particularly valuable for environments without shared storage, such as branch offices, small deployments, or situations where you’re migrating between different storage platforms.
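As a sketch of the steps above, a Shared Nothing Live Migration can be started from PowerShell once live migration is enabled on both hosts. Host names, the VM name, and the destination path are example values:

```powershell
# Run on both source and destination hosts to enable live migration
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Move the running VM and its storage to a host with no shared storage
Move-VM -Name "VM01" -DestinationHost "HV02" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"
```

Kerberos authentication requires constrained delegation to be configured on the host computer accounts; CredSSP is the alternative when you initiate the move while logged on to the source host.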
Option A is incorrect because while Live Migration with SMB storage does allow you to use network-based storage (SMB 3.0 file shares) instead of traditional SAN storage, it still requires shared storage that both source and destination hosts can access. The virtual machine files reside on the SMB share, and both hosts connect to this shared location during migration. This provides flexibility in storage architecture but doesn’t address the requirement of moving VMs between hosts without shared storage. SMB storage is a shared storage solution delivered over the network rather than through block-level protocols like iSCSI or Fibre Channel.
Option B is incorrect because Storage Migration specifically refers to moving a virtual machine’s storage files (virtual hard disks, configuration files) while the VM continues to run, but it doesn’t necessarily move the VM between hosts. Storage Migration allows you to relocate VM files from one storage location to another on the same host or between storage locations accessible to that host. While Storage Migration is a component of Shared Nothing Live Migration, the term “Storage Migration” alone doesn’t describe the complete process of moving both the running VM and its storage between hosts without shared storage.
Option D is incorrect because Quick Migration is an older migration technology that briefly pauses virtual machines, saves their memory state to disk, moves the saved state and configuration files to the destination host, and then resumes the VM on the new host. Quick Migration requires shared storage between hosts because it doesn’t transfer storage files—it only moves the VM configuration and saved state. Quick Migration also causes VM downtime during the save and restore process, typically ranging from several seconds to minutes depending on the VM’s memory size. This doesn’t meet modern requirements for seamless migration without shared storage.
Question 123
You have a Windows Server 2022 server running Internet Information Services (IIS) with multiple web applications. You need to implement a solution that automatically redirects HTTP requests to HTTPS and adds security headers to all responses. What should you configure?
A) URL Rewrite rules and HTTP Response Headers
B) Request Filtering and MIME types
C) Application Request Routing and Output Caching
D) Failed Request Tracing and Custom Errors
Answer: A
Explanation:
The correct answer is option A. URL Rewrite rules combined with HTTP Response Headers provide the complete solution for both redirecting HTTP to HTTPS and adding security headers to responses. URL Rewrite allows you to create rules that match incoming HTTP requests and redirect them to HTTPS, while HTTP Response Headers enables you to add security-related headers to all outgoing responses from the web server.
To implement HTTP to HTTPS redirection, you install the URL Rewrite module (if not already present), create a rewrite rule that matches requests where HTTPS is off, and configure a redirect action to https://{HTTP_HOST}/{R:1} with a 301 permanent redirect status. For security headers, you navigate to HTTP Response Headers in IIS Manager and add headers such as Strict-Transport-Security (HSTS) to enforce HTTPS, X-Content-Type-Options to prevent MIME sniffing, X-Frame-Options to prevent clickjacking, Content-Security-Policy to control resource loading, and X-XSS-Protection for cross-site scripting protection. This combination provides comprehensive security enhancement by ensuring encrypted connections and implementing defense-in-depth security controls through proper headers.
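The same configuration can be expressed declaratively in the site's web.config. This is a minimal sketch assuming the URL Rewrite module is installed; the header values are example policies you would tune to your applications:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="HTTP to HTTPS" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
    <httpProtocol>
      <customHeaders>
        <add name="Strict-Transport-Security"
             value="max-age=31536000; includeSubDomains" />
        <add name="X-Content-Type-Options" value="nosniff" />
        <add name="X-Frame-Options" value="SAMEORIGIN" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```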
Option B is incorrect because Request Filtering is used to block malicious requests based on criteria like request limits, URL sequences, file extensions, and HTTP verbs, while MIME types define how the server handles different file extensions. Request Filtering helps prevent attacks but doesn’t redirect HTTP to HTTPS or add security headers to responses. MIME types control content type mapping (like associating .jpg with image/jpeg) but don’t provide redirection or header injection capabilities. Neither feature addresses the requirements in the question.
Option C is incorrect because Application Request Routing (ARR) is used for load balancing, routing requests to server farms, and implementing reverse proxy scenarios, while Output Caching stores rendered pages in memory to improve performance. ARR excels at distributing requests across multiple backend servers but isn’t designed for HTTP-to-HTTPS redirection within a single server. Output Caching improves performance by reducing processing overhead but doesn’t add security headers or perform redirects. These features serve different purposes than the security requirements specified.
Option D is incorrect because Failed Request Tracing is a diagnostic tool that logs detailed information about requests that meet failure conditions (such as status codes, time taken, or specific errors), and Custom Errors allows you to customize error pages displayed to users for different HTTP status codes. Failed Request Tracing is for troubleshooting and diagnosis, not for modifying request behavior or adding headers. Custom Errors controls error page presentation but doesn’t redirect traffic or add security headers to successful responses.
Question 124
You manage a Windows Server 2022 environment with Active Directory Domain Services. You need to implement a solution that prevents users from logging on to domain computers outside of their assigned geographic location. What should you configure?
A) Authentication Policies and Silos
B) Active Directory Sites and Services
C) Fine-Grained Password Policies
D) Dynamic Access Control
Answer: A
Explanation:
The correct answer is option A. Authentication Policies and Authentication Policy Silos, introduced in Windows Server 2012 R2, provide advanced access control that can restrict where users and computers can authenticate based on various criteria including user groups, computer groups, and authentication mechanisms. You can create authentication policies that specify allowed authentication sources and targets, effectively controlling from which locations or computers specific users can log on.
To implement geographic-based logon restrictions, you would create authentication policies that define which computers in specific locations (organized by Active Directory groups or organizational units representing geographic regions) are allowed authentication sources for specific user groups. For example, you could create an authentication policy that allows users in the “US-Employees” group to authenticate only from computers in the “US-Computers” group, effectively restricting logons to US-based systems. Authentication Policy Silos then group related users, computers, and service accounts together with specific policies. While this approach requires careful planning and organization of your directory structure, it provides granular control over authentication scenarios and helps implement location-based access restrictions.
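A rough PowerShell sketch of this pattern follows. The policy name, silo name, account, and group are hypothetical, and the SDDL condition is illustrative only (a real deployment would substitute the SID of the computer group representing the location):

```powershell
# Policy allowing authentication only from members of a location-based
# computer group (SDDL condition shown with a placeholder group SID)
New-ADAuthenticationPolicy -Name "US-Only-Logon" -Enforce `
    -UserAllowedToAuthenticateFrom `
    'O:SYG:SYD:(XA;OICI;CR;;;WD;(Member_of {SID(S-1-5-21-<domain>-<rid>)}))'

# Group related accounts into a silo governed by that policy
New-ADAuthenticationPolicySilo -Name "US-Silo" -Enforce `
    -UserAuthenticationPolicy "US-Only-Logon"
Grant-ADAuthenticationPolicySiloAccess -Identity "US-Silo" -Account "jsmith"
Set-ADAccountAuthenticationPolicySilo -Identity "jsmith" `
    -AuthenticationPolicySilo "US-Silo"
```

Authentication policies require the Windows Server 2012 R2 (or later) domain functional level and rely on Kerberos armoring, so enforcement should be piloted in audit mode first.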
Option B is incorrect because Active Directory Sites and Services is used to manage the physical topology of Active Directory, defining sites based on IP subnets, configuring replication between domain controllers, and optimizing authentication by directing clients to nearby domain controllers. While sites represent physical locations and can influence which domain controller handles authentication, Sites and Services doesn’t provide mechanisms to restrict users from authenticating at specific locations. Sites optimize replication and service location but don’t enforce authentication restrictions based on user-to-location relationships.
Option C is incorrect because Fine-Grained Password Policies (Password Settings Objects) allow you to apply different password and account lockout policies to different groups of users within a domain. Fine-Grained Password Policies control password complexity, length, age, history, and lockout behavior for specific user populations, but they don’t provide any functionality for restricting authentication based on location or computer identity. Password policies govern credential requirements, not where those credentials can be used for authentication.
Option D is incorrect because Dynamic Access Control is a feature for controlling access to files and folders based on sophisticated claims-based conditions including user attributes, device properties, and resource classifications. DAC enables you to create conditional access policies for file system resources (such as “only allow access to documents classified as ‘Confidential’ from managed devices”), but it operates at the file access level, not at the authentication level. DAC controls what users can access after they’ve authenticated, not where they can authenticate from.
Question 125
You have a Windows Server 2022 file server with Data Deduplication enabled. You need to optimize the deduplication process to run during business hours without impacting user experience. What should you configure?
A) Deduplication schedule and throughput optimization
B) Storage QoS policies
C) File Server Resource Manager quotas
D) VSS shadow copy schedule
Answer: A
Explanation:
The correct answer is option A. Data Deduplication includes configurable schedules and resource-throttling settings that control when deduplication jobs run and how much CPU, memory, and disk I/O they consume. To optimize deduplication for business hours without impacting users, you would configure the deduplication schedule to run continuously or during specific times and enable throughput optimization, which lowers the priority of deduplication jobs so they yield resources to user workloads.
You configure these settings using PowerShell with the Set-DedupSchedule cmdlet to modify or create deduplication job schedules, and Set-DedupVolume with the -MinimumFileAgeDays parameter to control which files are eligible for deduplication. Throughput optimization mode reduces the CPU and disk I/O priority of deduplication processes, allowing them to run continuously while user workloads take precedence. You might configure optimization jobs to run during business hours with throughput optimization enabled, while running garbage collection and scrubbing jobs during off-hours when they won’t compete with user activity. This balanced approach maintains deduplication efficiency while ensuring acceptable user experience.
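A minimal sketch of such a schedule in PowerShell follows; the schedule name, window, and volume are example values, and "WeeklyGarbageCollection" is one of the default schedules created when deduplication is enabled:

```powershell
# Low-priority optimization allowed to run during the business day
New-DedupSchedule -Name "BusinessHoursOptimization" -Type Optimization `
    -Days Monday,Tuesday,Wednesday,Thursday,Friday `
    -Start 08:00 -DurationHours 10 -Priority Low -Memory 25

# Keep heavier maintenance jobs off-hours
Set-DedupSchedule -Name "WeeklyGarbageCollection" -Start 22:00

# Only deduplicate files older than three days on this volume
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 3
```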
Option B is incorrect because Storage QoS (Quality of Service) policies control and guarantee minimum and maximum IOPS for virtual machine workloads in Hyper-V environments using Scale-Out File Server or Storage Spaces Direct. Storage QoS prevents noisy neighbor problems by limiting how much storage performance individual VMs can consume and ensuring minimum performance guarantees. While Storage QoS manages storage performance, it’s designed for virtualization scenarios and doesn’t specifically control or optimize Data Deduplication processes. Storage QoS and Data Deduplication serve different purposes in storage management.
Option C is incorrect because File Server Resource Manager quotas limit the amount of disk space that users or folders can consume on file servers. Quotas can be hard limits that prevent additional file saves when exceeded or soft limits that trigger warnings. FSRM quotas manage storage capacity consumption but don’t control the scheduling or performance impact of Data Deduplication processes. Quotas are about space allocation and limits, while deduplication scheduling is about process timing and resource utilization management.
Option D is incorrect because VSS (Volume Shadow Copy Service) shadow copy schedules control when point-in-time snapshots of volumes are created for the Previous Versions feature. Shadow copies allow users to recover previous versions of files or restore accidentally deleted files without administrator intervention. While shadow copy scheduling is important for data protection, it’s unrelated to Data Deduplication processing. Shadow copies and deduplication are independent features that can coexist on the same volumes but are configured and scheduled separately.
Question 126
You manage a Windows Server 2022 environment with multiple servers in a workgroup configuration. You need to implement centralized Windows Update management for these servers. What should you deploy?
A) Windows Server Update Services (WSUS) in standalone mode
B) Configuration Manager with Software Update Point
C) Azure Update Management
D) Group Policy for Windows Update settings
Answer: A
Explanation:
The correct answer is option A. Windows Server Update Services (WSUS) can be deployed in standalone mode to manage updates for workgroup servers that are not domain-joined. In standalone mode, WSUS doesn’t rely on Active Directory Group Policy for client configuration. Instead, you configure workgroup computers to point to the WSUS server by modifying local registry settings or using local Group Policy on each server.
To implement this, you install the WSUS role on a Windows Server, configure it to synchronize updates from Microsoft Update or upstream WSUS servers, and organize updates into computer groups. On each workgroup server, you configure Windows Update client settings by editing the local Group Policy (gpedit.msc) or directly modifying registry keys under HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate to specify the WSUS server URL. While this requires more manual configuration than domain-based WSUS deployment, it provides centralized update management, approval workflows, reporting, and controlled update deployment for workgroup environments. WSUS in standalone mode gives you the same update management capabilities as domain-integrated WSUS but requires individual client configuration.
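The per-server registry configuration can be scripted rather than set through gpedit.msc. This sketch assumes a WSUS server reachable at a placeholder URL on the default port 8530:

```powershell
# On each workgroup server, point the Windows Update client at WSUS
$wu = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
New-Item -Path $wu -Force | Out-Null
Set-ItemProperty -Path $wu -Name WUServer -Value "http://wsus01:8530"
Set-ItemProperty -Path $wu -Name WUStatusServer -Value "http://wsus01:8530"

# Tell the Automatic Updates agent to use the configured server
New-Item -Path "$wu\AU" -Force | Out-Null
Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1 -Type DWord
```

Because the same keys apply to every workgroup server, the script can be reused across the fleet, which offsets some of the manual effort of not having domain Group Policy.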
Option B is incorrect because while Microsoft Configuration Manager (formerly SCCM) with a Software Update Point provides comprehensive update management and much more, it’s a more complex and expensive solution typically designed for domain-joined environments. Configuration Manager requires significant infrastructure including SQL Server, management points, and distribution points. Although Configuration Manager can manage workgroup computers using client certificates or other authentication methods, it’s overly complex for basic workgroup update management. WSUS provides sufficient update management capabilities for workgroup servers without the overhead and complexity of Configuration Manager.
Option C is incorrect because while Azure Update Management is a cloud-based solution that can manage updates for both Azure VMs and on-premises servers (including workgroup servers through the Log Analytics agent), it requires Azure subscriptions, a Log Analytics workspace, and ongoing cloud service costs. For simple workgroup environments, especially those without existing Azure infrastructure or where cloud connectivity is limited, Azure Update Management represents unnecessary complexity and cost. WSUS provides on-premises update management without cloud dependencies. However, Azure Update Management becomes more attractive for hybrid environments already leveraging Azure services.
Option D is incorrect because Group Policy for Windows Update settings relies on Active Directory Group Policy Objects (GPOs) to distribute configurations to domain-joined computers. Workgroup servers are not domain members and cannot receive or apply domain-based Group Policies. While you can configure Windows Update settings using local Group Policy on individual workgroup servers, this doesn’t provide centralized management—you would need to configure each server individually. The question asks for centralized management, which requires a central update server like WSUS that workgroup computers can be configured to use.
Question 127
You have a Windows Server 2022 server running the DHCP Server role. You need to configure DHCP failover between two DHCP servers to provide high availability. The solution must ensure that both servers actively service client requests. What failover mode should you configure?
A) Hot standby mode with 50/50 split
B) Load balance mode
C) Active-passive configuration
D) Clustered DHCP
Answer: B
Explanation:
The correct answer is option B. Load balance mode is the DHCP failover configuration that allows both DHCP servers to actively service client requests simultaneously. In load balance mode, both servers share the responsibility for responding to DHCP requests based on a configurable percentage split (default 50/50), providing both high availability and load distribution across the two servers.
When you configure DHCP failover in load balance mode, both servers maintain synchronized copies of the lease database and scope configuration. Client DHCP requests are responded to by both servers based on the configured load distribution percentage—for example, with a 50/50 split, each server responds to approximately half the requests. If one server becomes unavailable, the surviving server automatically assumes responsibility for the entire client population, providing seamless failover. To configure this, you use the Configure Failover wizard in DHCP Manager, select load balance mode, specify the load distribution percentage, configure the Maximum Client Lead Time (MCLT) for synchronization, and set a shared secret for secure communication between servers.
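The same relationship can be created with PowerShell instead of the wizard. Server names, the relationship name, scope, and shared secret below are example values:

```powershell
# Create a load balance failover relationship for a scope
Add-DhcpServerv4Failover -ComputerName "dhcp1" -PartnerServer "dhcp2" `
    -Name "HQ-Failover" -ScopeId 10.0.0.0 `
    -LoadBalancePercent 50 -MaxClientLeadTime 01:00:00 `
    -SharedSecret "ExampleSecret" -AutoStateTransition $true
```

-LoadBalancePercent sets the share handled by the local server; omitting -HotStandby keeps the relationship in load balance mode, and -AutoStateTransition lets the surviving server take over the full scope automatically after the partner-down interval.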
Option A is incorrect because hot standby mode is a different failover configuration where one server operates as the primary active server handling all client requests under normal circumstances, while the partner server remains in standby mode and only becomes active if the primary fails. In hot standby mode, only one server actively services clients at a time, which doesn’t meet the requirement of both servers actively servicing client requests. Hot standby is appropriate when you want one server to handle all requests during normal operation with another server ready for failover, but it doesn’t provide active load distribution.
Option C is incorrect because active-passive configuration is essentially another term for hot standby mode in DHCP failover contexts. In an active-passive setup, the active server handles all DHCP requests while the passive server waits in standby mode, only taking over if the active server fails. This configuration provides high availability through failover capability but doesn’t meet the requirement of both servers actively servicing client requests simultaneously. Active-passive provides redundancy but not active load balancing between servers.
Option D is incorrect because while you can configure DHCP in a Windows Failover Cluster for high availability, clustered DHCP operates as an active-passive configuration where the DHCP service runs on one cluster node at a time. If that node fails, the cluster fails over the DHCP service to another node, but only one node actively provides DHCP services at any given time. Clustered DHCP provides high availability but not simultaneous active servicing by multiple servers. Additionally, DHCP failover (introduced in Windows Server 2012) provides easier configuration and better functionality than clustered DHCP for most scenarios.
Question 128
You manage a Windows Server 2022 environment with Remote Desktop Services deployed in a session-based deployment. You need to configure load balancing across multiple RD Session Host servers. What should you implement?
A) RD Connection Broker with session host collection
B) Network Load Balancing (NLB) cluster
C) DNS round robin
D) Failover Clustering
Answer: A
Explanation:
The correct answer is option A. The Remote Desktop Connection Broker (RD Connection Broker) is the core component for managing and load balancing user connections across multiple RD Session Host servers in a Remote Desktop Services deployment. The Connection Broker maintains session state information, determines which session host has available capacity, and directs new connections to appropriate servers based on load balancing algorithms.
When you create an RD Session Host collection through Server Manager and add multiple session host servers to that collection, the Connection Broker automatically handles load balancing by distributing new user sessions across available session hosts. The Connection Broker tracks active sessions, server load, and available resources, ensuring optimal distribution of users. It also handles session reconnection, allowing users who disconnect to reconnect to their existing sessions on the correct server. The Connection Broker can be configured for high availability by deploying it in a SQL Server-based or Windows Server-based high availability configuration. This integrated RDS load balancing provides session awareness and intelligent distribution that simple network-based load balancing cannot offer.
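The collection-based setup described above can be sketched in PowerShell; the server and collection names are placeholders for your deployment:

```powershell
# Create a session collection spanning two session hosts; the
# Connection Broker then load balances new sessions across them
New-RDSessionCollection -CollectionName "GeneralDesktops" `
    -SessionHost "rdsh1.contoso.com","rdsh2.contoso.com" `
    -ConnectionBroker "rdcb.contoso.com"

# Scale out later by adding another session host to the collection
Add-RDSessionHost -CollectionName "GeneralDesktops" `
    -SessionHost "rdsh3.contoso.com" `
    -ConnectionBroker "rdcb.contoso.com"
```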
Option B is incorrect because while Network Load Balancing can distribute network traffic across multiple servers at the network level, it doesn’t provide session-aware load balancing for RDS. NLB distributes connections based on network algorithms but has no awareness of RDS session state, which user sessions are active on which servers, or how to reconnect users to their existing sessions. Using NLB alone for RDS could result in users being connected to different session hosts each time they reconnect, potentially creating multiple sessions instead of reconnecting to existing sessions. NLB is network-layer load balancing, while RDS requires application-aware load balancing provided by Connection Broker.
Option C is incorrect because DNS round robin distributes client connections by rotating through multiple IP addresses in DNS query responses, providing very basic load distribution. However, DNS round robin has no awareness of server load, capacity, or session state. It cannot track which users have sessions on which servers or reconnect users to their existing sessions. DNS round robin also doesn’t handle server failures gracefully—clients may be directed to failed servers until DNS cache entries expire. For RDS, which requires session persistence and intelligent connection management, DNS round robin is inadequate compared to the Connection Broker’s sophisticated load balancing capabilities.
Option D is incorrect because Failover Clustering is designed for high availability of services that run on one node at a time with automatic failover to other nodes during failures. While you can use failover clustering to provide high availability for the RD Connection Broker role itself, clustering doesn’t provide load balancing across multiple RD Session Host servers. Session hosts need to run simultaneously and serve users concurrently, not fail over to each other. The Connection Broker provides the load balancing functionality across session hosts, and clustering would only be used to make the Connection Broker itself highly available, not to load balance the session hosts.
Question 129
You have a Windows Server 2022 DNS server hosting several DNS zones. You need to configure the DNS server to respond to queries only from clients on specific subnets while still allowing zone transfers to secondary DNS servers. What should you configure?
A) DNS policies with query resolution policies based on client subnet
B) DNS server recursion settings
C) Zone transfer settings on each zone
D) DNS server listening addresses
Answer: A
Explanation:
The correct answer is option A. DNS policies in Windows Server 2016 and later provide granular control over how DNS servers respond to queries based on various criteria including client subnet, time of day, query type, and more. Query resolution policies allow you to configure the DNS server to respond differently to queries from different sources, including ignoring queries from specific subnets while responding to others.
To implement this requirement, you would create DNS query resolution policies using PowerShell cmdlets like Add-DnsServerQueryResolutionPolicy. You specify client subnet conditions that define which subnets are allowed to receive responses, and set the action to DENY for queries from unauthorized subnets and ALLOW for authorized subnets. Zone transfer permissions are configured separately through zone properties, allowing you to maintain zone transfers to authorized secondary servers regardless of the query resolution policies. This approach provides precise control—you can restrict general query responses to specific subnets while still permitting necessary administrative operations like zone transfers to designated servers.
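A minimal sketch of this policy pair follows; the subnet name and address ranges are example values:

```powershell
# Define the subnets that should receive answers
Add-DnsServerClientSubnet -Name "CorpSubnets" `
    -IPv4Subnet "10.10.0.0/16","10.20.0.0/16"

# Silently ignore queries from everywhere else; zone transfer
# settings on each zone are unaffected by this policy
Add-DnsServerQueryResolutionPolicy -Name "IgnoreOutsideCorp" `
    -Action IGNORE -ClientSubnet "NE,CorpSubnets" -ProcessingOrder 1
```

The "NE," prefix negates the subnet match, so a single IGNORE policy covers all unauthorized sources while queries from CorpSubnets fall through to normal resolution.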
Option B is incorrect because DNS server recursion settings control whether the DNS server performs recursive resolution on behalf of clients (resolving queries for domains it doesn’t host authoritatively), not which clients can send queries to the server. Recursion settings can be globally enabled or disabled, or restricted to specific client addresses, but they affect recursive query processing rather than providing comprehensive query filtering. While you can configure some access control through recursion settings, they don’t provide the granular subnet-based query response control that DNS policies offer, and they’re primarily designed to prevent open resolver abuse rather than implement detailed access policies.
Option C is incorrect because zone transfer settings control which DNS servers are authorized to receive copies of zone data through zone transfers (AXFR/IXFR), not which clients can query the DNS server for name resolution. Zone transfer restrictions are configured per zone to specify which secondary servers can replicate zone data. While zone transfer settings are important for securing zone data, they operate independently from query response policies. The question requires both restricting query responses to specific subnets and allowing zone transfers, which means you need to configure query policies separately from zone transfer settings.
Option D is incorrect because DNS server listening addresses determine which network interfaces and IP addresses the DNS server binds to and listens on for incoming requests. Configuring listening addresses controls whether the server responds on specific network interfaces but doesn’t provide subnet-level granularity for access control. If the server listens on an interface connected to a network, it will generally respond to all queries received on that interface. Listening addresses are about which interfaces the service uses, not about filtering which clients can successfully query the server based on their source subnet.
Question 130
You manage a Windows Server 2022 Hyper-V environment with multiple virtual machines. You need to implement a solution that limits the storage IOPS available to a specific virtual machine to prevent it from consuming excessive storage resources. What should you configure?
A) Storage QoS maximum IOPS policy
B) Virtual hard disk QoS settings
C) Hyper-V storage priority
D) Storage Spaces Direct performance tier
Answer: A
Explanation:
The correct answer is option A. Storage Quality of Service (Storage QoS) policies allow you to set minimum and maximum IOPS limits for virtual machines to prevent any single VM from monopolizing storage resources and ensure fair resource distribution. Storage QoS is available in Windows Server 2012 R2 and later, with enhanced capabilities in Server 2016 and beyond, particularly in Scale-Out File Server and Storage Spaces Direct environments.
To configure Storage QoS, you can use Hyper-V Manager to access the virtual machine’s settings, navigate to the hard drive configuration, and enable QoS management by specifying maximum IOPS values. Alternatively, you can use PowerShell with the Set-VMHardDiskDrive cmdlet and parameters like -MaximumIOPS to set limits. In Storage Spaces Direct or Scale-Out File Server environments, you can create more sophisticated QoS policies using the New-StorageQosPolicy cmdlet that define both minimum (guaranteed) and maximum (throttled) IOPS values. These policies ensure that critical VMs receive necessary performance while preventing less important VMs from creating “noisy neighbor” problems that degrade performance for other workloads.
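As a sketch of the commands described above (the VM name, controller location, and policy name are hypothetical), both the per-VHD limit and the policy-based approach look like this in PowerShell:

```powershell
# Per-VHD throttle on a standalone Hyper-V host: cap the disk at 500 IOPS.
Set-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI -ControllerNumber 0 `
    -ControllerLocation 0 -MaximumIOPS 500

# On Storage Spaces Direct / Scale-Out File Server, a named policy can be
# shared by many virtual disks and defines both a floor and a ceiling.
$policy = New-StorageQosPolicy -Name "Silver" -MinimumIops 100 -MaximumIops 500
Get-VM -Name "SQL01" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```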
Option B is incorrect because while you configure QoS on individual virtual hard disks attached to VMs, the term “virtual hard disk QoS settings” isn’t specific enough and might be confused with other VHD properties. The correct and specific feature is Storage QoS, which operates at the policy and management level. Additionally, virtual hard disk settings include many other properties (size, type, location) that aren’t related to performance throttling. The most accurate answer identifies Storage QoS as the specific feature designed for this purpose rather than using the generic term “virtual hard disk settings.”
Option C is incorrect because while Hyper-V does have storage-related priority settings, these are typically about relative priority during resource contention rather than absolute IOPS limits. Storage priority settings might influence which VM gets preferential treatment when storage resources are constrained, but they don’t establish hard IOPS caps that prevent a VM from exceeding specific performance thresholds. Storage QoS provides the explicit maximum IOPS limiting functionality required in the question, whereas priority settings provide relative importance rankings during competition for resources.
Option D is incorrect because Storage Spaces Direct performance tiers refer to the classification of storage into different performance categories (typically SSD-based fast tiers and HDD-based capacity tiers) for automated tiering of frequently accessed data to faster media. Tiering is about where data is physically stored based on access patterns to optimize performance, not about limiting how much IOPS individual workloads can consume. Performance tiers improve overall storage efficiency but don’t provide per-VM IOPS throttling capabilities. Tiering and QoS are complementary features that serve different purposes in storage management.
Question 131
You have a Windows Server 2022 file server with SMB shares hosting confidential documents. You need to configure the file server to encrypt all SMB traffic for specific shares. What should you configure?
A) SMB encryption on individual shares
B) IPsec encryption policies
C) BitLocker Drive Encryption
D) EFS encryption on folders
Answer: A
Explanation:
The correct answer is option A. SMB encryption, available in SMB 3.0 and later, provides end-to-end encryption of SMB data in transit between clients and servers. You can enable SMB encryption on a per-share basis, ensuring that all data transferred to and from specific shares is encrypted regardless of whether it traverses trusted or untrusted networks. This protects confidential data from eavesdropping and man-in-the-middle attacks during network transmission.
To configure SMB encryption on specific shares, you can use PowerShell with the Set-SmbShare cmdlet and the -EncryptData $true parameter, or create new shares with encryption enabled using New-SmbShare -EncryptData $true. When SMB encryption is enabled on a share, only clients supporting SMB 3.0 or later can access it, and all data transfer is automatically encrypted using AES-CCM or AES-GCM algorithms. This provides transparent encryption without requiring certificates, VPN tunnels, or complex infrastructure. Users experience no difference in functionality while benefiting from enhanced security. You can also enable SMB encryption globally for the entire server using Set-SmbServerConfiguration -EncryptData $true.
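A minimal PowerShell sketch of the steps above (share names and paths are hypothetical):

```powershell
# Require encryption on an existing share.
Set-SmbShare -Name "Confidential" -EncryptData $true -Force

# Or create a new share with encryption enabled from the start.
New-SmbShare -Name "Legal" -Path "D:\Shares\Legal" -EncryptData $true

# Optionally enforce encryption for every share on the server.
Set-SmbServerConfiguration -EncryptData $true -Force

# Verify which shares require encryption.
Get-SmbShare | Select-Object Name, EncryptData
```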
Option B is incorrect because while IPsec encryption policies can encrypt network traffic at the IP layer, including SMB traffic, IPsec requires more complex configuration involving security rules, certificates or preshared keys, and potential firewall adjustments. IPsec provides network-layer encryption for all traffic between systems, which may be excessive when you only need to protect specific file shares. SMB encryption is simpler, application-specific, and designed specifically for SMB file sharing scenarios. IPsec would work but represents a more complex solution than necessary for the stated requirement.
Option C is incorrect because BitLocker Drive Encryption provides encryption for data at rest on physical disks, protecting data if drives are stolen or removed from servers. BitLocker encrypts entire volumes but doesn’t encrypt data during network transmission. When users access files over the network from a BitLocker-encrypted volume, the data is decrypted on the server and transmitted across the network without encryption unless additional measures like SMB encryption or IPsec are implemented. BitLocker and SMB encryption serve complementary but different purposes—BitLocker protects stored data while SMB encryption protects data in transit.
Option D is incorrect because EFS (Encrypting File System) provides file-level encryption for data at rest on NTFS volumes, encrypting individual files and folders using users’ certificates. Like BitLocker, EFS protects data stored on disk but doesn’t encrypt data during network transmission. When users access EFS-encrypted files over the network, the files are decrypted on the server during the read process and transmitted without encryption unless additional transit protection is configured. EFS is useful for protecting sensitive files locally but doesn’t address the requirement for encrypting SMB traffic during transmission.
Question 132
You manage a Windows Server 2022 environment with Active Directory Certificate Services. You need to configure the Certificate Authority to automatically publish the Certificate Revocation List (CRL) to a web server for distribution. What should you configure?
A) CRL distribution points in the CA properties
B) Authority Information Access (AIA) extensions
C) CDP locations and CRL publication settings
D) Certificate templates autoenrollment
Answer: C
Explanation:
The correct answer is option C. CDP (CRL Distribution Point) locations and CRL publication settings in the Certificate Authority properties control where CRLs are published and how clients can retrieve them. To automatically publish CRLs to a web server, you configure both the publication path (where the CA writes the CRL files) and the distribution points (the URLs clients use to retrieve CRLs).
In the Certification Authority console, you access the CA properties and navigate to the Extensions tab where you configure CDP locations. You specify file system paths where the CA should publish CRLs (like a local folder that’s mapped or synced to the web server, or a UNC path to the web server’s wwwroot directory). You also configure HTTP URLs that clients will use to download CRLs and ensure the “Include in CRLs” and “Include in the CDP extension of issued certificates” options are properly selected. The CA automatically publishes updated CRLs to the configured locations according to the CRL publication schedule. For web distribution, you must ensure the web server has appropriate directory permissions and that IIS is configured to serve .crl files with the correct MIME type.
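Assuming the ADCSAdministration module on the CA, the same CDP configuration can be scripted instead of set through the Extensions tab. The UNC path and HTTP URL below are hypothetical placeholders for your web server:

```powershell
# Publish the CRL (and delta CRL) to a share the web server exposes.
Add-CACrlDistributionPoint -Uri "\\WEB01\CRLShare\contoso-ca.crl" `
    -PublishToServer -PublishDeltaToServer -Force

# Advertise the HTTP URL in issued certificates so clients know where to fetch CRLs.
Add-CACrlDistributionPoint -Uri "http://pki.contoso.com/crl/contoso-ca.crl" `
    -AddToCertificateCdp -AddToFreshestCrl -Force

# Restart the CA service so the new extensions take effect, then publish a CRL now.
Restart-Service certsvc
certutil -CRL
```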
Option A is partially correct but incomplete. While CRL distribution points are indeed configured in the CA properties, simply configuring CDP URLs without also setting up the publication paths and settings won’t result in automatic CRL publishing. The answer needs to include both distribution points (where clients retrieve CRLs) and publication settings (where the CA writes CRLs). Option C more completely describes both aspects—CDP locations specify the URLs clients use, and publication settings control how and where the CA publishes the files. The configuration requires both components working together.
Option B is incorrect because Authority Information Access (AIA) extensions specify where clients can retrieve the CA’s certificate (not the CRL) and where to find the Online Certificate Status Protocol (OCSP) responder if configured. AIA helps clients validate certificate chains by locating parent CA certificates, but it doesn’t control CRL publishing or distribution. While AIA and CDP are both configured in the same Extensions tab of CA properties and both are included in issued certificates, they serve different purposes—AIA points to CA certificates and OCSP services, while CDP points to CRLs.
Option D is incorrect because certificate template autoenrollment settings control automatic enrollment and renewal of certificates for users and computers based on Group Policy, not CRL publication and distribution. Autoenrollment is about certificate lifecycle management for certificate recipients, while CRL publication is about certificate validity checking infrastructure. These are separate aspects of PKI management. Autoenrollment ensures entities receive certificates automatically, while CRL publication ensures those certificates’ revocation status can be checked.
Question 133
You have a Windows Server 2022 server running Hyper-V with several generation 2 virtual machines. You need to implement a solution that allows you to create checkpoints that exclude the virtual machine’s memory state to reduce storage requirements. What type of checkpoint should you configure?
A) Standard checkpoints
B) Production checkpoints
C) Automatic checkpoints
D) Manual checkpoints
Answer: B
Explanation:
The correct answer is option B. Production checkpoints, introduced in Windows Server 2016 as the default checkpoint type (the older snapshot mechanism was renamed standard checkpoints), use Volume Shadow Copy Service (VSS) inside the guest operating system to create application-consistent snapshots without saving the VM’s memory state. Production checkpoints create a point-in-time backup of the virtual machine that can be restored without impacting running applications, and they consume significantly less storage than checkpoints that include memory state.
When you create a production checkpoint, the guest operating system’s VSS writers quiesce applications and flush buffers to ensure data consistency, then a snapshot of the virtual disks is created. Because memory state isn’t captured, production checkpoints are smaller and faster to create. They’re designed for production environments where application consistency is important and where you want backup-like functionality without the storage overhead of memory state. To configure Hyper-V to use production checkpoints, you access the VM settings, navigate to the Checkpoints section, and select “Production checkpoints.” You can also configure a fallback to standard checkpoints if production checkpoints fail (if VSS isn’t available in the guest).
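The same configuration can be set from PowerShell (the VM and checkpoint names are hypothetical):

```powershell
# "Production" falls back to a standard checkpoint if guest VSS is unavailable;
# use "ProductionOnly" to fail the operation instead of falling back.
Set-VM -Name "APP01" -CheckpointType Production

# Take an application-consistent checkpoint without memory state.
Checkpoint-VM -Name "APP01" -SnapshotName "Pre-update"
```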
Option A is incorrect in the modern context because “standard checkpoints” in Windows Server 2016 and later refer to the checkpoint type that includes memory state, saved device state, and virtual disk state. Standard checkpoints capture the complete running state of the VM, allowing you to restore to the exact point in time including running applications and open files, but they consume significantly more storage because they include memory dumps. This is the opposite of what the question asks for—the requirement is to exclude memory state to reduce storage, which standard checkpoints don’t do.
Option C is incorrect because “automatic checkpoints” isn’t a distinct checkpoint type in Hyper-V. The “Use automatic checkpoints” VM setting simply takes a checkpoint automatically when the virtual machine starts, and the checkpoint created is whichever type (production or standard) the VM is configured to use. Automatic checkpoint creation is about when checkpoints are taken (automatically vs. manually), not about what data they include, so “automatic” doesn’t answer the question about which type excludes memory state.
Option D is incorrect because “manual checkpoints” refers to the method of creation (administrator-initiated versus automatic), not the type of checkpoint or what data it includes. Manual checkpoints can be either production or standard checkpoints depending on the VM’s checkpoint configuration. The distinction between manual and automatic is about the triggering mechanism, while the distinction between production and standard is about what data is captured. The question asks about checkpoint type based on data inclusion, not creation method.
Question 134
You manage a Windows Server 2022 DNS environment with multiple DNS servers. You need to implement a solution that provides different DNS responses to clients based on their geographic location. What should you configure?
A) DNS policies with location-based query resolution
B) Split-brain DNS with multiple zones
C) Conditional forwarders
D) GlobalNames zone
Answer: A
Explanation:
The correct answer is option A. DNS policies in Windows Server 2016 and later support location-based query resolution, also known as geo-location based traffic management. This feature allows DNS servers to respond to queries with different answers based on the client’s subnet or geographic location, enabling intelligent traffic direction for load balancing, disaster recovery, and performance optimization.
To implement geo-location based DNS responses, you create DNS client subnets that represent different geographic locations using Add-DnsServerClientSubnet, then create DNS zone scopes that contain different resource records for the same names using Add-DnsServerZoneScope. Finally, you create DNS policies using Add-DnsServerQueryResolutionPolicy that evaluate the client’s source subnet and return responses from the appropriate zone scope. For example, you might configure www.contoso.com to resolve to a US-based server IP (192.168.1.10) for clients from US subnets, and to an EU-based server IP (10.0.0.10) for clients from European subnets. This provides intelligent DNS-based traffic management without requiring expensive third-party global load balancing solutions.
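The three-step sequence above, using the same example subnets and addresses, sketches out as follows (the subnet, scope, and policy names are hypothetical):

```powershell
# 1. Define client subnets representing each region.
Add-DnsServerClientSubnet -Name "USSubnet" -IPv4Subnet "192.168.1.0/24"
Add-DnsServerClientSubnet -Name "EUSubnet" -IPv4Subnet "10.0.0.0/24"

# 2. Create zone scopes holding region-specific records for the same name.
Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "USZoneScope"
Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "EUZoneScope"
Add-DnsServerResourceRecord -ZoneName "contoso.com" -A -Name "www" `
    -IPv4Address "192.168.1.10" -ZoneScope "USZoneScope"
Add-DnsServerResourceRecord -ZoneName "contoso.com" -A -Name "www" `
    -IPv4Address "10.0.0.10" -ZoneScope "EUZoneScope"

# 3. Map each client subnet to its zone scope with a query resolution policy.
Add-DnsServerQueryResolutionPolicy -Name "USPolicy" -Action ALLOW `
    -ClientSubnet "eq,USSubnet" -ZoneScope "USZoneScope,1" -ZoneName "contoso.com"
Add-DnsServerQueryResolutionPolicy -Name "EUPolicy" -Action ALLOW `
    -ClientSubnet "eq,EUSubnet" -ZoneScope "EUZoneScope,1" -ZoneName "contoso.com"
```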
Option B is incorrect because split-brain DNS (also called split-horizon DNS) refers to providing different DNS responses to internal versus external clients, typically by maintaining separate zones with the same name on internal and external DNS servers. Split-brain DNS is about internal/external differentiation rather than geographic location. While split-brain DNS provides different answers to different client populations, it’s typically based on a simple inside/outside network boundary rather than fine-grained geographic locations. DNS policies provide much more granular control based on client subnets representing multiple geographic locations.
Option C is incorrect because conditional forwarders direct queries for specific domains to designated DNS servers. Conditional forwarders are used for namespace delegation and cross-forest name resolution scenarios, such as forwarding queries for partner.com to a partner organization’s DNS servers. Conditional forwarders don’t provide different responses based on client location—they forward queries based on the domain name being queried, not based on who is querying. All clients querying the same domain name through conditional forwarders receive the same forwarding treatment regardless of their location.
Option D is incorrect because a GlobalNames zone is a special DNS zone that provides single-label name resolution (like “servername” instead of “servername.contoso.com”) across an entire forest without requiring WINS. GlobalNames zones help organizations migrate away from WINS by supporting single-label names in DNS. GlobalNames zones don’t provide location-based query responses—they simply enable single-label name resolution across domains. All clients querying a GlobalNames zone receive the same answer regardless of their location.
Question 135
You have a Windows Server 2022 server running the Network Policy Server (NPS) role. You need to configure NPS to log authentication requests to a SQL Server database for compliance auditing. What should you configure?
A) SQL Server logging in NPS accounting settings
B) Windows Event Forwarding to SQL Server
C) RADIUS accounting to SQL Server
D) Connection request forwarding to SQL database
Answer: A
Explanation:
The correct answer is option A. NPS supports SQL Server logging as an accounting method, allowing authentication and authorization requests to be logged directly to a SQL Server database for long-term storage, advanced reporting, and compliance auditing. SQL Server logging provides more robust storage, better query capabilities, and easier integration with reporting tools compared to text file logging.
To configure SQL Server logging in NPS, you access the NPS console, navigate to Accounting properties, and configure the accounting provider to log to SQL Server. You specify the SQL Server instance name, database name, and authentication method (Windows authentication or SQL authentication). NPS writes each record as XML by calling a stored procedure (report_event) in the target database, so the database and procedure must be prepared in advance; once configured, NPS logs authentication requests, accounting data, and policy evaluation results to the database. SQL Server logging can be configured in addition to or instead of local file logging, providing flexible options for meeting different compliance and operational requirements. You should ensure SQL Server has adequate disk space and performance capacity to handle the logging volume.
Option B is incorrect because Windows Event Forwarding is a feature for collecting event logs from multiple servers to a central collector server for log aggregation and monitoring. While you could configure Event Forwarding to collect NPS events and then use third-party tools to parse those events into SQL Server, this is an indirect and complex approach. NPS has native SQL Server logging capabilities that directly write structured data to SQL Server without requiring intermediate event collection infrastructure. Event Forwarding is useful for general event log collection but isn’t the native NPS solution for SQL Server logging.
Option C is incorrect because “RADIUS accounting to SQL Server” is essentially what SQL Server logging in NPS provides, but the terminology isn’t quite accurate in the context of NPS configuration. In NPS, you configure this through the “Accounting” settings and select SQL Server as the logging destination. “RADIUS accounting” typically refers to the protocol and process of accounting data exchange between RADIUS clients (like VPN servers) and the RADIUS server (NPS), rather than the specific configuration setting. The more precise answer identifies this as SQL Server logging in NPS accounting settings.
Option D is incorrect because connection request forwarding refers to RADIUS proxy functionality where NPS forwards authentication requests to other RADIUS servers rather than processing them locally. This is used in scenarios where NPS acts as a RADIUS proxy to route requests to backend RADIUS servers, such as in service provider environments or when authenticating users from different organizations. Connection request forwarding is about routing RADIUS requests, not about logging to SQL Server. Forwarding and logging are separate NPS functions with different purposes.
Question 136
You manage a Windows Server 2022 environment with multiple file servers. You need to implement a solution that prevents users from saving files with specific file extensions to designated folders. What should you implement?
A) File Server Resource Manager file screens
B) NTFS permissions
C) Dynamic Access Control
D) AppLocker policies
Answer: A
Explanation:
The correct answer is option A. File Server Resource Manager (FSRM) file screens allow administrators to control what types of files users can save to specific folders based on file extensions. File screens use file groups (predefined or custom collections of file extensions) and can be configured to actively block or passively monitor unauthorized file types, providing both enforcement and compliance monitoring.
To implement file screens, you install the File Server Resource Manager role service, create or modify file groups to include the file extensions you want to block (such as .exe, .mp3, .avi, or custom extensions), then create file screens on specific folders or volumes. File screens can be configured as active (blocking file saves that match the screen) or passive (allowing saves but generating notifications for monitoring). When users attempt to save blocked file types, they receive an error message that can be customized to explain the policy. FSRM file screens operate at the file system level and work regardless of how users access files—through mapped drives, UNC paths, or applications. This provides comprehensive protection against unwanted file types being stored on corporate file servers.
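A minimal sketch of those steps in PowerShell (the file group name and folder path are hypothetical):

```powershell
# Install FSRM and its management tools.
Install-WindowsFeature -Name FS-Resource-Manager -IncludeManagementTools

# Define a custom file group of extensions to block.
New-FsrmFileGroup -Name "Blocked Media" -IncludePattern @("*.mp3", "*.avi", "*.exe")

# Create an active screen on the target folder; -Active blocks matching saves
# rather than merely logging them.
New-FsrmFileScreen -Path "D:\Shares\Projects" -IncludeGroup "Blocked Media" -Active
```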
Option B is incorrect because NTFS permissions control who can access files and folders and what actions they can perform (read, write, modify, delete, etc.), but permissions don’t provide granular control over file types. NTFS permissions operate on security principals (users and groups) and access levels, not file extensions or content types. You could theoretically restrict write access completely, but you cannot use NTFS permissions to allow users to save some file types while blocking others in the same folder. NTFS permissions are about access control, while file screens are about content filtering.
Option C is incorrect because Dynamic Access Control provides sophisticated claims-based access control for files based on user attributes, device properties, resource classifications, and other conditions. While DAC is powerful for implementing complex access policies like “only allow access to confidential documents from managed devices,” it’s designed for access control decisions rather than file type filtering during save operations. DAC evaluates whether users can access existing files based on conditions, but it doesn’t inherently block specific file extensions from being saved. File screens are the purpose-built tool for file type filtering.
Option D is incorrect because AppLocker policies control which applications and scripts users can execute on Windows systems based on various rules including publisher, path, and file hash. AppLocker prevents unauthorized applications from running, providing application whitelisting for security. While AppLocker deals with file execution, it doesn’t control what files users can save to network file shares. AppLocker and FSRM file screens serve different purposes—AppLocker controls executable content on endpoints, while FSRM file screens control file storage on servers.
Question 137
You have a Windows Server 2022 Hyper-V host with limited physical memory. You need to configure virtual machines to use less physical RAM when idle while ensuring they can quickly access additional memory when needed. What should you configure?
A) Dynamic Memory with appropriate minimum and maximum values
B) Smart Paging
C) Memory weight and priority
D) NUMA spanning
Answer: A
Explanation:
The correct answer is option A. Dynamic Memory allows Hyper-V to automatically adjust the amount of physical RAM allocated to virtual machines based on their actual memory demands. When VMs are idle or have low memory requirements, dynamic memory reduces their physical memory allocation, freeing RAM for other VMs or the host. When VMs need more memory, Hyper-V dynamically allocates additional RAM up to the configured maximum, ensuring responsive performance during periods of high demand.
To configure dynamic memory effectively, you access each VM’s settings, enable Dynamic Memory, and configure three key parameters: startup RAM (memory allocated during VM boot), minimum RAM (lowest amount the VM can be reduced to during operation), and maximum RAM (highest amount the VM can use). You also configure memory buffer (percentage of additional memory Hyper-V maintains as a cushion) and memory weight (priority for memory allocation during contention). Dynamic Memory relies on memory ballooning delivered through the guest’s Hyper-V integration services, allowing RAM to be reclaimed from and returned to running VMs. This configuration maximizes host memory efficiency while maintaining good VM performance, making it ideal for environments with limited physical RAM and variable workload demands.
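All of those parameters can be set in one PowerShell call (the VM name and sizes are hypothetical; startup RAM can only be changed while the VM is off):

```powershell
# Enable Dynamic Memory with a 512 MB floor, 4 GB ceiling, 20% buffer,
# and above-average weight (priority) during memory contention.
Set-VMMemory -VMName "APP01" -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB `
    -Buffer 20 -Priority 80
```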
Option B is incorrect because Smart Paging is a specialized feature that only activates during VM startup when insufficient physical memory is available to meet the VM’s startup RAM requirement. If a VM with dynamic memory configured needs more RAM than its minimum to start up, but physical memory is overcommitted, Smart Paging temporarily uses disk-based paging to supplement memory during the startup phase. Once the VM is running, Smart Paging is not used—the VM operates with dynamic memory. Smart Paging is a safety net for startup scenarios, not a solution for ongoing memory management when VMs are idle or active.
Option C is incorrect because memory weight and priority settings influence which VMs receive preferential memory allocation when physical RAM is constrained and multiple VMs are competing for memory resources. Higher weight values give VMs priority during memory allocation decisions, but weight doesn’t automatically adjust memory allocation based on VM idle or active states. Weight is a relative priority mechanism for contention scenarios, while Dynamic Memory is an active memory management system that continuously adjusts allocations based on actual demand. Weight would be configured along with Dynamic Memory to prioritize certain VMs, but it doesn’t by itself provide the dynamic adjustment capability described in the question.
Option D is incorrect because NUMA (Non-Uniform Memory Architecture) spanning controls whether a virtual machine can use memory from multiple NUMA nodes in the physical server. When NUMA spanning is disabled, VMs are constrained to memory from a single NUMA node, which can provide better performance for NUMA-aware applications but limits available memory per VM. When enabled, VMs can access memory across NUMA nodes if needed. NUMA spanning is about memory topology and access patterns across processor nodes, not about dynamically adjusting memory allocation based on VM idle or active states. NUMA configuration affects performance characteristics but doesn’t reduce memory usage when VMs are idle.
Question 138
You manage a Windows Server 2022 environment with Active Directory Domain Services. You need to configure domain controllers to reject authentication requests using NTLM and only accept Kerberos authentication. What should you configure?
A) Network security: LAN Manager authentication level Group Policy
B) Kerberos Policy settings
C) Authentication Policies
D) Security Options for NTLM
Answer: A
Explanation:
The correct answer is option A. The “Network security: LAN Manager authentication level” Group Policy setting controls which LAN Manager authentication variants Windows systems will accept and use. By configuring this policy to its most restrictive level, “Send NTLMv2 response only. Refuse LM & NTLM,” you cause domain controllers to reject LM and NTLMv1 authentication; pairing it with the “Network security: Restrict NTLM” policies allows NTLM to be blocked entirely so that clients must use Kerberos.
This policy is located under Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options. When the most restrictive level is enforced on domain controllers, clients attempting to authenticate with LM or NTLMv1 are rejected, steering them toward Kerberos authentication. This significantly improves security because Kerberos provides mutual authentication, stronger encryption, and better protection against credential theft attacks compared to NTLM. However, you must carefully test this configuration because some legacy applications, services, and authentication scenarios may require NTLM fallback, and blocking NTLM entirely can break those scenarios. Microsoft recommends auditing NTLM usage first using NTLM auditing features before enforcing Kerberos-only authentication.
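For reference, the GPO setting maps to the LmCompatibilityLevel registry value under the LSA key; a sketch for a single server (in production you would deploy this through Group Policy rather than direct registry edits):

```powershell
# Level 5 = "Send NTLMv2 response only. Refuse LM & NTLM".
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
    -Name "LmCompatibilityLevel" -Value 5 -Type DWord
```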
Option B is incorrect because Kerberos Policy settings configure Kerberos-specific parameters like maximum ticket lifetime, maximum tolerance for computer clock synchronization, and ticket renewal settings. These policies control how Kerberos operates but don’t disable NTLM authentication. Kerberos policies adjust Kerberos behavior and security characteristics but don’t prevent systems from falling back to NTLM when Kerberos isn’t available or configured. To reject NTLM and enforce Kerberos-only authentication, you need to configure authentication level policies, not Kerberos operational parameters.
Option C is incorrect because Authentication Policies (part of Authentication Policy Silos introduced in Windows Server 2012 R2) provide advanced access control that can restrict authentication based on device and user characteristics, but they’re primarily focused on controlling who can authenticate where and from which devices. While Authentication Policies provide sophisticated authentication controls, the specific configuration for rejecting NTLM in favor of Kerberos is controlled through the LAN Manager authentication level setting. Authentication Policies are about restricting authentication scenarios based on conditions, not about blocking specific authentication protocols.
Option D is incorrect because while there are various security options related to NTLM (like “Network security: Restrict NTLM: Audit NTLM authentication in this domain” and “Network security: Restrict NTLM: NTLM authentication in this domain”), the most direct and effective way to reject NTLM authentication and enforce Kerberos is through the LAN Manager authentication level policy. The NTLM restriction policies provide granular control and auditing, but the authentication level setting is the fundamental control for determining which protocols are acceptable. The authentication level policy provides the clearest and most direct enforcement of Kerberos-only authentication.
Question 139
You have a Windows Server 2022 server running Internet Information Services (IIS) hosting a web application. You need to configure the application to automatically start when IIS starts, even if no requests have been received. What should you configure?
A) Application pool start mode to AlwaysRunning and application preload
B) Application pool recycling settings
C) IIS application initialization module
D) Web garden configuration
Answer: A
Explanation:
The correct answer is option A. Configuring the application pool start mode to “AlwaysRunning” combined with enabling application preload (preloadEnabled attribute) ensures that web applications start automatically when IIS starts and remain running even without incoming requests. This configuration eliminates “cold start” delays where users experience slow response times when accessing an application for the first time after IIS restart or application pool recycling.
To implement this, you configure two settings: First, in the application pool properties, set the Start Mode to “AlwaysRunning” instead of “OnDemand” (the default). This keeps the worker process running continuously. Second, for the application itself in IIS Manager, set the preloadEnabled attribute to True in the application’s advanced settings, or configure it in applicationHost.config. When preloadEnabled is true, IIS automatically makes requests to the application during startup to initialize it fully, loading assemblies, compiling code, and warming up caches. This combination provides the best user experience by ensuring applications are always ready to serve requests immediately without initialization delays.
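In applicationHost.config, these two settings correspond to the startMode attribute on the application pool and the preloadEnabled attribute on the application. A minimal sketch of the relevant fragments (the pool, site, and application names are placeholders, not from the question):

```xml
<configuration>
  <system.applicationHost>
    <applicationPools>
      <!-- Keep the worker process running even when no requests arrive -->
      <add name="MyAppPool" startMode="AlwaysRunning" />
    </applicationPools>
    <sites>
      <site name="Default Web Site">
        <!-- Warm up the application when its worker process starts -->
        <application path="/MyApp" applicationPool="MyAppPool" preloadEnabled="true" />
      </site>
    </sites>
  </system.applicationHost>
</configuration>
```

Note that preloadEnabled requires the Application Initialization module to be installed for the warm-up requests to be issued.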
Option B is incorrect because application pool recycling settings control when and how application pools restart (based on time intervals, request counts, memory thresholds, or schedules). Recycling is about refreshing application pools to maintain health and release resources, not about ensuring applications start automatically when IIS starts. Recycling settings determine when application pools restart during operation, but they don’t control startup behavior or eliminate cold starts. After recycling, without AlwaysRunning and preload configuration, the application pool waits for the first request before initializing, which is what you’re trying to avoid.
Option C is incorrect, although it is partially right in concept. The Application Initialization module is indeed involved in the preload functionality, but simply installing or enabling the module alone isn’t sufficient—you must also configure the application pool start mode and application preload settings to achieve the desired behavior. Application Initialization is the underlying IIS feature that provides the capability, but it requires proper configuration of both the application pool (AlwaysRunning) and the application (preloadEnabled) to function. The module enables the functionality, but the configuration makes it work.
Option D is incorrect because web garden configuration involves running multiple worker processes for a single application pool to distribute workload across multiple processes. Web gardens can improve throughput on multi-core systems but don’t address automatic startup or cold start elimination. Web gardens are about concurrency and load distribution during operation, not about ensuring applications start automatically when IIS starts. Additionally, web gardens can complicate applications that use in-process session state or other process-local resources. Web gardens and automatic startup are orthogonal concepts serving different purposes.
Question 140
You manage a Windows Server 2022 environment with Distributed File System Replication (DFSR) configured between multiple file servers. You need to configure replication to only occur during off-peak hours to conserve bandwidth. What should you configure?
A) Replication group schedule
B) Connection bandwidth throttling
C) Staging folder quota
D) Conflict resolution policy
Answer: A
Explanation:
The correct answer is option A. The replication group schedule in DFSR controls when replication is allowed to occur between members of a replication group. By configuring a custom schedule, you can specify that replication should only occur during specific time windows, such as overnight hours when network bandwidth is less constrained by user activity, effectively limiting replication to off-peak hours while preventing it during business hours.
To configure the replication schedule, you open DFS Management, navigate to the replication group, access the Replication Group Schedule (for all connections) or individual connection schedules, and define time blocks when replication is enabled versus disabled. The schedule uses a weekly calendar grid where you can select specific hours and days for replication. During scheduled replication windows, DFSR actively replicates changes between servers. During blocked time periods, replication is suspended and queues changes for transmission during the next available window. This approach provides precise control over when replication consumes network bandwidth, allowing you to balance data currency requirements with bandwidth conservation during peak business hours when users need maximum network capacity.
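The schedule grid divides each day into 15-minute intervals. The DFSR PowerShell module represents a day’s schedule as a 96-character string, one character per interval, where F means full bandwidth and 0 blocks replication; such a string can be supplied to the Set-DfsrGroupSchedule cmdlet’s -BandwidthDetail parameter. As an illustrative sketch (the 22:00–06:00 window and the function name are assumptions, not from the question), the following Python builds that string for an off-peak-only window:

```python
# Sketch: build a one-day DFSR schedule string, assuming the DFSR
# PowerShell convention of 96 characters (one per 15-minute interval),
# where "F" = full bandwidth and "0" = replication blocked.

def build_day_schedule(start_hour, end_hour):
    """Return a 96-char string enabling replication only in [start_hour, end_hour)."""
    slots = []
    for quarter in range(96):              # 96 fifteen-minute slots per day
        hour = quarter // 4
        # Handle windows that wrap past midnight (e.g. 22:00 -> 06:00).
        if start_hour <= end_hour:
            enabled = start_hour <= hour < end_hour
        else:
            enabled = hour >= start_hour or hour < end_hour
        slots.append("F" if enabled else "0")
    return "".join(slots)

schedule = build_day_schedule(22, 6)   # off-peak window: 22:00-06:00
print(len(schedule))                   # 96
print(schedule.count("F"))             # 32 slots = 8 hours enabled
```

In PowerShell, a string built this way would be applied per day of the week, leaving business hours blocked and overnight hours open, which matches the off-peak requirement in the question.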
Option B is incorrect because connection bandwidth throttling limits the amount of bandwidth DFSR can use during replication but doesn’t control when replication occurs. Bandwidth throttling sets a maximum replication speed (in kilobits per second) to prevent DFSR from saturating network links, but replication continues 24/7 within the throttling limits. Throttling is useful for limiting replication impact but doesn’t confine replication to specific time windows. For off-peak-only replication, you need schedule-based control rather than rate-based throttling. Throttling and scheduling can be used together—scheduling determines when, throttling determines how fast.
Option C is incorrect because the staging folder quota determines how much disk space is allocated for the staging area where DFSR stores files waiting to be replicated. The staging folder is a temporary holding area that prevents replication failures when large files are being processed. Staging folder size affects how many or how large files can be queued for replication but doesn’t control when replication occurs or how much bandwidth it uses. Insufficient staging space can cause replication delays or errors, but adjusting staging quota doesn’t limit replication to off-peak hours—it’s about local storage capacity for replication operations.
Option D is incorrect because conflict resolution policy determines what happens when the same file is modified on multiple servers simultaneously before replication can synchronize the changes. DFSR uses “last writer wins” by default, but you can view and manage conflicts through DFS Management. Conflict resolution is about handling simultaneous conflicting changes, not about scheduling when replication occurs. Conflict policy ensures data consistency when conflicts arise but has no relationship to controlling replication timing or bandwidth usage. Conflict resolution and replication scheduling address completely different aspects of DFSR management.