Microsoft AZ-800 Administering Windows Server Hybrid Core Infrastructure Exam Dumps and Practice Test Questions Set 6 Q 101-120

Question 101

You have a Windows Server 2022 server running the Web Server (IIS) role. You need to configure the web server to use Server Name Indication (SNI) to host multiple HTTPS websites with different SSL certificates on the same IP address and port. What should you do?

A) Configure host headers for each website

B) Enable SNI in the SSL certificate binding for each site

C) Install a wildcard certificate

D) Configure multiple IP addresses on the server

Answer: B

Explanation:

The correct answer is option B. Server Name Indication (SNI) is an extension to the TLS protocol that allows a web server to host multiple HTTPS websites with different SSL certificates on the same IP address and port 443. To implement SNI in IIS, you must enable the “Require Server Name Indication” option when configuring the HTTPS binding for each website and assign the appropriate SSL certificate to each binding.

When you configure an HTTPS binding in IIS for a website, you access the site’s bindings settings and add or edit an HTTPS binding. In the binding configuration, you specify the host name, select the appropriate SSL certificate from the certificate store, and check the “Require Server Name Indication” checkbox. This tells IIS to use SNI for that binding, allowing the server to examine the hostname in the client’s TLS handshake request and select the correct certificate. SNI enables efficient use of IP addresses and simplifies HTTPS hosting by eliminating the need for dedicated IP addresses for each secure website. All modern browsers support SNI, making it the standard approach for hosting multiple HTTPS sites.
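
As a rough illustration, the same binding can be created with the WebAdministration PowerShell module; the site name, host name, and certificate thumbprint below are placeholders, and SslFlags 1 is the value that enables SNI.

```powershell
# Sketch only: site name, host header, and thumbprint are placeholders.
Import-Module WebAdministration

# Create an HTTPS binding with SNI enabled (SslFlags 1 = SNI)
New-WebBinding -Name "ContosoSite" -Protocol https -Port 443 `
    -HostHeader "www.contoso.com" -SslFlags 1

# Attach the site-specific certificate (by thumbprint) from the machine "My" store
$binding = Get-WebBinding -Name "ContosoSite" -Protocol https -HostHeader "www.contoso.com"
$binding.AddSslCertificate("0123456789ABCDEF0123456789ABCDEF01234567", "My")
```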

Option A is incorrect because while host headers are necessary for hosting multiple websites on the same IP address for HTTP traffic, they alone don’t solve the HTTPS certificate problem. Traditional HTTPS bindings without SNI require either different IP addresses for each site or a single certificate that covers all hostnames (like a wildcard or SAN certificate). Host headers work at the HTTP protocol level, but SSL/TLS negotiation happens before HTTP headers are exchanged, which is why SNI was created to extend TLS to support hostname indication during the handshake.

Option C is incorrect because while installing a wildcard certificate (like *.contoso.com) can cover multiple subdomains under a single certificate, it doesn’t provide the flexibility of using different certificates for different websites. A wildcard certificate is a valid solution when all sites are subdomains of the same parent domain and you want them to share a certificate, but it doesn’t allow you to use separate certificates with different properties or from different certificate authorities. SNI provides more granular control by allowing each website to have its own specific certificate.

Option D is incorrect because configuring multiple IP addresses on the server is the traditional pre-SNI method for hosting multiple HTTPS websites, where each site requires its own dedicated IP address. While this approach still works, it’s inefficient in terms of IP address consumption and is unnecessary with SNI support. SNI was specifically developed to solve the IP address scarcity problem for HTTPS hosting. Using multiple IP addresses defeats the purpose of implementing SNI and wastes valuable IPv4 address space.

Question 102

You manage a Windows Server 2022 environment with multiple servers in different geographic locations. You need to implement a time synchronization solution that ensures all servers maintain accurate time within 1 millisecond of UTC. Which Windows Time service configuration should you implement?

A) Configure all servers to sync with an internal domain controller

B) Configure servers to sync with an external NTP server pool using symmetric key authentication

C) Configure servers to sync with a stratum 1 time source using Windows Time service

D) Enable VM time synchronization integration services

Answer: C

Explanation:

The correct answer is option C. To achieve time accuracy within 1 millisecond of UTC, you need to configure your Windows Time service to synchronize with a highly accurate stratum 1 time source. Stratum 1 servers are directly connected to reference clocks (such as GPS, atomic clocks, or other precise time sources) and provide the most accurate time available. Windows Time service (W32Time) can be configured to synchronize with external stratum 1 NTP servers to achieve the required precision.

For enterprise environments requiring millisecond-level accuracy, you would designate one or more servers as the primary time source and configure them to sync with multiple stratum 1 servers for redundancy and accuracy. These servers become your internal stratum 2 sources, and other servers in your organization sync with them. You should also configure Windows Time service with appropriate polling intervals, correction settings, and special poll intervals to maintain tight time synchronization. In Active Directory environments, the PDC Emulator typically serves as the authoritative time source that syncs with external stratum 1 servers, while all other domain members sync with the domain hierarchy.
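
A minimal sketch of the corresponding w32tm configuration on the authoritative time server follows; the NTP host names are placeholders, and the 0x8 flag requests client-mode synchronization.

```powershell
# Illustrative only: point the authoritative time server (typically the PDC emulator)
# at stratum 1 NTP sources; the host names below are placeholders.
w32tm /config /manualpeerlist:"ntp1.example.gov,0x8 ntp2.example.gov,0x8" /syncfromflags:manual /reliable:yes /update

# Restart the Windows Time service and force an immediate resync
Restart-Service w32time
w32tm /resync /rediscover

# Verify the offset and the configured time source
w32tm /query /status
```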

Option A is incorrect because while configuring all servers to sync with an internal domain controller (typically the PDC Emulator) is the standard practice in Active Directory environments, this approach alone doesn’t guarantee 1 millisecond accuracy unless that domain controller is configured to sync with a highly accurate external time source. Internal domain controllers by themselves aren’t accurate enough to provide millisecond-level precision—they must be synchronized with stratum 1 or stratum 2 time sources. The accuracy of the entire time hierarchy depends on the accuracy of the root time source.

Option B is incorrect because while using an external NTP server pool with authentication is a good practice for security and reliability, NTP pools typically consist of stratum 2 or stratum 3 servers with varying accuracy levels. Public NTP pools like pool.ntp.org are excellent for general time synchronization (providing accuracy in the tens of milliseconds range), but they don’t guarantee the 1 millisecond accuracy required in this scenario. For millisecond-level precision, you need dedicated stratum 1 sources or specialized time services designed for high-precision requirements.

Option D is incorrect because VM time synchronization integration services are Hyper-V features that sync virtual machine time with the host’s time. While this feature is useful for preventing time drift in virtualized environments, it doesn’t address the fundamental requirement of synchronizing with an accurate UTC source. The accuracy of VM time synchronization depends entirely on how accurate the host’s time is. Simply enabling integration services doesn’t ensure 1 millisecond accuracy—the host itself must be properly synchronized with stratum 1 time sources.

Question 103

You have a Windows Server 2022 server running the Print and Document Services role. You need to configure the print server to automatically remove printer drivers that haven’t been used for 60 days. What should you configure?

A) Print Management console driver isolation settings

B) Print server properties cleanup settings

C) Group Policy printer driver installation restrictions

D) Print Management console automated tasks

Answer: B

Explanation:

The correct answer is option B. The print server properties in Print Management include cleanup settings that allow you to automatically remove unused printer drivers after a specified period. To configure this, you open Print Management, right-click on the print server, select Properties, and navigate to the “Advanced” tab where you’ll find the option “Remove unused drivers” with a configurable time period.

When you enable this setting and specify 60 days, the print server automatically identifies printer drivers that haven’t been used by any printers for that duration and removes them from the driver store. This automated cleanup helps maintain a lean driver repository, reduces potential security vulnerabilities from outdated drivers, and prevents the accumulation of unnecessary driver files that can consume disk space and complicate print server management. The cleanup process runs automatically based on the configured schedule, requiring no manual intervention once properly configured.

Option A is incorrect because driver isolation settings in Print Management are used to improve print server stability and security by running printer drivers in isolated processes separate from the print spooler service. When a driver crashes in isolation mode, it doesn’t bring down the entire print spooler. While driver isolation is an important feature for reliability, it doesn’t provide any functionality for automatically removing unused drivers. Driver isolation and driver cleanup serve completely different purposes in print server management.

Option C is incorrect because Group Policy printer driver installation restrictions are used to control which users or groups can install printer drivers and which drivers are allowed to be installed based on security policies. These restrictions help prevent unauthorized driver installation and enforce security standards, but they don’t provide automated cleanup of existing unused drivers. Group Policy restrictions are preventive controls, not maintenance or cleanup mechanisms for removing drivers already installed on the print server.

Option D is incorrect because while Print Management does allow you to create automated tasks for various printer-related activities (such as listing printers or monitoring printer status), there isn’t a built-in automated task specifically designed for removing unused drivers. The cleanup functionality for unused drivers is a server-level configuration in the print server properties, not something you implement through the automated tasks feature. Automated tasks are primarily for monitoring, reporting, and managing printer objects rather than driver maintenance.

Question 104

You manage a Windows Server 2022 environment with Active Directory Certificate Services (AD CS). You need to configure the Certificate Authority to automatically issue certificates to domain computers for IPsec authentication. What should you do?

A) Configure certificate auto-enrollment for the Computer certificate template

B) Create a custom certificate template with IPsec extensions

C) Enable certificate requests through the web enrollment interface

D) Configure the CA to use standalone mode

Answer: A

Explanation:

The correct answer is option A. Certificate auto-enrollment allows domain computers to automatically request, receive, and renew certificates without administrator intervention. To enable automatic certificate issuance for IPsec authentication, you need to configure auto-enrollment for a certificate template that includes the appropriate key usage and enhanced key usage extensions for IPsec, such as the Computer certificate template or a custom template based on it.

The configuration involves duplicating the Computer certificate template (or creating a new one), ensuring it has the proper IPsec-related extensions (IP security IKE intermediate purpose), configuring permissions to allow domain computers to enroll, and enabling auto-enrollment in the template properties. Then you publish the template to the Certificate Authority. On the client side, you configure Group Policy to enable certificate auto-enrollment for computers under Computer Configuration > Windows Settings > Security Settings > Public Key Policies > Certificate Services Client – Auto-Enrollment. Once configured, domain computers automatically request and receive certificates suitable for IPsec authentication during Group Policy refresh cycles.
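
A brief sketch of the command-line side of this workflow is shown below; "IPsecComputer" is a hypothetical duplicated template name used only for illustration.

```powershell
# On the enterprise CA: publish the duplicated template ("IPsecComputer" is hypothetical)
certutil -SetCATemplates +IPsecComputer

# On a domain computer: refresh policy and trigger auto-enrollment processing immediately
gpupdate /target:computer /force
certutil -pulse
```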

Option B is incorrect because while you might create a custom certificate template for specific IPsec requirements, the built-in Computer certificate template already includes the necessary extensions for IPsec authentication (IP security IKE intermediate). Creating a custom template is an optional step if you need specific configurations beyond what the default Computer template provides, but it’s not required. The key requirement is configuring auto-enrollment, not necessarily creating a custom template. Simply creating a template without enabling auto-enrollment wouldn’t result in automatic certificate issuance.

Option C is incorrect because the web enrollment interface is a manual enrollment method where users or administrators navigate to a web page to request certificates interactively. While web enrollment is useful for certain scenarios (like issuing certificates to non-domain devices or when auto-enrollment isn’t available), it doesn’t provide automatic certificate issuance. The requirement specifically asks for automatic issuance to domain computers, which is achieved through auto-enrollment via Group Policy, not through enabling a web interface that requires manual interaction.

Option D is incorrect because configuring the CA to use standalone mode would actually prevent automatic certificate issuance to domain computers. Standalone CAs don’t integrate with Active Directory and can’t use AD-based certificate templates or auto-enrollment features. Enterprise CAs are required for auto-enrollment because they leverage Active Directory integration to publish certificate templates, verify user and computer identities, and automatically process certificate requests. Standalone CAs require manual approval for all certificate requests, which contradicts the requirement for automatic issuance.

Question 105

You have a Windows Server 2022 Hyper-V failover cluster hosting multiple virtual machines. You need to configure the cluster to automatically balance virtual machine workload across cluster nodes based on CPU and memory usage. Which feature should you enable?

A) Dynamic optimization in Virtual Machine Manager

B) Cluster-Aware Updating

C) Virtual machine live migration

D) Hyper-V Replica

Answer: A

Explanation:

The correct answer is option A. Dynamic optimization is a feature in System Center Virtual Machine Manager (SCVMM) that automatically monitors resource usage across Hyper-V hosts in a cluster and live migrates virtual machines to balance the workload. When dynamic optimization is enabled and configured, VMM continuously evaluates CPU, memory, disk I/O, and network usage on all hosts, and when it detects imbalances that exceed configured thresholds, it automatically triggers live migrations to redistribute virtual machines more evenly.

You configure dynamic optimization by setting aggressiveness levels (low, medium, or high) that determine how actively VMM will move VMs to achieve balance, and you specify the metrics and thresholds that trigger optimization. Dynamic optimization can run on a schedule (such as during off-peak hours) or continuously. This automated workload balancing helps maintain optimal performance across the cluster without manual intervention, prevents individual hosts from becoming overloaded while others are underutilized, and maximizes the return on hardware investments by efficiently distributing workloads.

Option B is incorrect because Cluster-Aware Updating (CAU) is a feature that automates the process of applying Windows updates to failover cluster nodes while maintaining cluster availability. CAU orchestrates taking nodes out of service one at a time, draining workloads, applying updates, rebooting if necessary, and bringing nodes back into service. While CAU does temporarily move virtual machines between nodes during the update process, it’s designed for patch management during maintenance windows, not for continuous workload balancing based on resource utilization.

Option C is incorrect because while virtual machine live migration is the underlying technology that enables moving running VMs between cluster nodes without downtime, simply enabling live migration doesn’t provide automatic workload balancing. Live migration is a capability that must be triggered either manually by administrators or automatically by higher-level management tools like SCVMM’s dynamic optimization. Live migration is necessary for dynamic optimization to work, but it alone doesn’t monitor resource usage or make intelligent decisions about when and where to move virtual machines.

Option D is incorrect because Hyper-V Replica is a disaster recovery feature that asynchronously replicates virtual machines from a primary site to a secondary site (or within the same site for different purposes). Replica creates copies of VMs on replica servers that can be activated during disasters or for testing purposes. Hyper-V Replica doesn’t perform load balancing or move running VMs between hosts based on resource utilization—it’s specifically designed for business continuity and disaster recovery, not performance optimization.

Question 106

You manage a Windows Server 2022 DNS server that hosts multiple DNS zones. You need to configure the DNS server to prevent it from responding to queries for domains it doesn’t host, while still allowing it to perform recursive queries for internal clients. What should you configure?

A) Disable recursion on the DNS server

B) Configure the DNS server to use root hints only

C) Enable DNS socket pool

D) Configure recursion scope to respond only to specific subnets

Answer: D

Explanation:

The correct answer is option D. Configuring recursion scope allows you to specify which clients or subnets are permitted to use the DNS server for recursive queries while denying recursion to others. This configuration enables you to prevent external or unauthorized systems from using your DNS server to perform recursive queries for domains you don’t host, while still allowing your internal clients to resolve external domain names through recursion.

To implement this in Windows Server 2016 and later, you use the DNS policy feature: create a recursion scope that has recursion enabled, disable recursion in the default scope, and apply the permissive scope through a query resolution policy that matches the IP subnets of your internal network. Recursion policies can also evaluate other criteria, such as time of day and query type, for more granular control. This approach maintains full DNS functionality for authorized clients while protecting your DNS server from being exploited as an open resolver by external parties, which could otherwise be abused for DNS amplification attacks or other malicious purposes.
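
A PowerShell sketch of this configuration, assuming a hypothetical internal subnet of 10.0.0.0/8 and illustrative scope and policy names:

```powershell
# Define the internal client subnet (placeholder range)
Add-DnsServerClientSubnet -Name "InternalClients" -IPv4Subnet "10.0.0.0/8"

# Create a recursion scope that permits recursion
Add-DnsServerRecursionScope -Name "InternalRecursion" -EnableRecursion $true

# Disable recursion in the default scope so unmatched clients get no recursion
Set-DnsServerRecursionScope -Name "." -EnableRecursion $false

# Apply the permissive scope only to queries arriving from the internal subnet
Add-DnsServerQueryResolutionPolicy -Name "AllowInternalRecursion" -Action ALLOW `
    -ApplyOnRecursion -RecursionScope "InternalRecursion" `
    -Condition AND -ClientSubnet "EQ,InternalClients"
```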

Option A is incorrect because completely disabling recursion would prevent the DNS server from performing recursive queries on behalf of any client, including your internal clients. When recursion is disabled, the DNS server can only provide answers for zones it hosts authoritatively and will return referrals to other DNS servers rather than resolving queries. This would break external name resolution for internal clients who rely on the DNS server to recursively resolve internet domain names, which contradicts the requirement to allow recursive queries for internal clients.

Option B is incorrect because configuring the DNS server to use root hints only affects how the DNS server performs recursion when resolving names—it specifies that the server should start resolution by querying root DNS servers rather than using forwarders. This setting doesn’t restrict who can use the DNS server for recursive queries or prevent it from responding to queries for domains it doesn’t host. Root hints are used during the recursive resolution process but don’t provide access control for recursion functionality.

Option C is incorrect because the DNS socket pool is a security feature that randomizes the source port used for DNS queries to protect against cache poisoning attacks. Enabling the socket pool (which is actually enabled by default in modern Windows DNS servers) improves security by making it harder for attackers to predict the source port and inject malicious responses. However, socket pool configuration doesn’t control who can use the DNS server for recursive queries or prevent the server from responding to external recursion requests.

Question 107

You have a Windows Server 2022 server running the Remote Access role configured as a VPN server. You need to configure the VPN server to use RADIUS authentication against a Network Policy Server (NPS) for user authentication. What should you configure on the VPN server?

A) Configure RADIUS accounting in Remote Access Management

B) Configure the VPN server as a RADIUS client on NPS and configure RADIUS authentication on the VPN server

C) Install the Network Policy Server role on the VPN server

D) Configure connection request policies on the VPN server

Answer: B

Explanation:

The correct answer is option B. To use RADIUS authentication for VPN connections, you must configure a trust relationship between the VPN server and the NPS server by setting up the VPN server as a RADIUS client on the NPS server, and then configuring the VPN server to use RADIUS authentication. This two-part configuration establishes secure communication between the VPN server (RADIUS client) and the NPS server (RADIUS server).

On the NPS server, you add the VPN server as a RADIUS client by specifying its IP address or hostname and configuring a shared secret that both servers will use to encrypt RADIUS messages. On the VPN server, you open the Routing and Remote Access console, access the server properties, navigate to the Security tab, and select “RADIUS Authentication” as the authentication provider. You then add the NPS server as a RADIUS server by specifying its IP address, port (typically 1812 for authentication), and the same shared secret configured on NPS. Once configured, all VPN authentication requests are forwarded to NPS, which evaluates them against configured network policies and returns authentication results to the VPN server.
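
A condensed PowerShell sketch of both halves of the configuration follows; the server names, IP address, and shared secret are placeholders.

```powershell
# On the NPS server: register the VPN server as a RADIUS client (values are placeholders)
New-NpsRadiusClient -Name "VPN01" -Address "10.0.1.10" -SharedSecret "P@ssS3cret!"

# On the VPN server (Remote Access/RRAS): forward authentication requests to NPS
Add-RemoteAccessRadius -ServerName "nps01.contoso.com" -SharedSecret "P@ssS3cret!" `
    -Purpose Authentication
```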

Option A is incorrect because RADIUS accounting is a separate function that logs connection information (start time, duration, data transferred, etc.) to the RADIUS server for auditing and billing purposes. While you should configure RADIUS accounting for complete VPN session tracking, it doesn’t provide authentication functionality. Authentication and accounting are separate RADIUS services that can be configured independently. You need to configure RADIUS authentication (not just accounting) for user authentication, though best practice is to configure both authentication and accounting.

Option C is incorrect because installing the Network Policy Server role on the VPN server would make that server function as a RADIUS server itself, which isn’t necessary and goes against the distributed architecture implied by the question. The scenario specifies that you need to use authentication “against a Network Policy Server,” indicating that NPS is already deployed on a separate server. Installing NPS on the VPN server would create a combined VPN/NPS server rather than using the existing separate NPS infrastructure. While this configuration is technically possible, it’s not what the question asks for.

Option D is incorrect because connection request policies are configured on the RADIUS server (NPS), not on the VPN server (RADIUS client). Connection request policies on NPS determine how authentication and accounting requests are processed—whether they’re handled locally, forwarded to another RADIUS server, or rejected. The VPN server, acting as a RADIUS client, doesn’t have connection request policies; it simply forwards authentication requests to the configured RADIUS server. Policy evaluation and enforcement happen on the NPS server based on its configured network policies and connection request policies.

Question 108

You manage a Windows Server 2022 file server with DFS Namespaces configured. You need to ensure that when users access a DFS namespace folder, they are automatically directed to the file server in their local Active Directory site. What should you configure?

A) DFS Replication scheduling

B) Namespace server priority ordering method

C) Site-aware DFS referrals

D) DFS folder targets priority

Answer: C

Explanation:

The correct answer is option C. Site-aware DFS referrals, also known as site costing, automatically direct clients to file servers in their own Active Directory site when accessing DFS namespace folders. This feature uses Active Directory site topology information to determine the most appropriate server for each client based on network proximity. When a client requests access to a DFS namespace folder that has targets in multiple sites, the DFS namespace server returns referrals ordered by site cost, with servers in the client’s local site listed first.

Site-aware referrals are enabled by default in domain-based DFS namespaces, but you can verify and configure this behavior through DFS Management. The feature works by examining the client’s site membership (determined by the client’s IP address and Active Directory subnet-to-site mappings) and comparing it with the sites where folder targets are located. Clients always receive referrals to local-site targets first, followed by targets in other sites according to inter-site link costs defined in Active Directory Sites and Services. This automatic site awareness optimizes network utilization, reduces WAN traffic, and improves user experience by directing users to nearby resources.
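
Site costing can be verified or enabled from PowerShell as sketched below; the namespace path is a placeholder.

```powershell
# Check whether site costing is already enabled on the namespace root
Get-DfsnRoot -Path "\\contoso.com\Public" | Select-Object Path, Flags

# Enable site costing (site-aware referrals) if it is not
Set-DfsnRoot -Path "\\contoso.com\Public" -EnableSiteCosting $true
```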

Option A is incorrect because DFS Replication scheduling controls when file replication occurs between DFS Replication group members. Replication scheduling allows you to limit replication to specific time windows (such as off-peak hours) to manage bandwidth consumption. While replication scheduling is important for controlling when changes propagate between replicated folders, it doesn’t affect how clients are directed to DFS folder targets. Replication scheduling and client referrals are separate aspects of DFS management serving different purposes.

Option B is incorrect because namespace server priority ordering affects which namespace servers clients contact to obtain referrals, not which file servers (folder targets) clients are directed to after receiving referrals. Namespace server ordering determines the sequence in which clients try to contact namespace servers when multiple servers host the same namespace. While you can configure ordering methods for namespace servers (such as lowest cost or random), this doesn’t control site-based routing to folder targets, which is handled by site-aware referrals.

Option D is incorrect because while you can manually configure priority and active/standby status for individual DFS folder targets, this provides static prioritization rather than dynamic site-aware routing. Manual target priority configuration allows you to prefer specific servers regardless of client location, but it doesn’t automatically adapt based on which site the client is in. Site-aware referrals dynamically determine the best target for each client based on their location, whereas manual priority settings apply the same preferences to all clients regardless of their site membership.

Question 109

You have a Windows Server 2022 server that functions as a software-defined storage server using Storage Spaces Direct. You need to add additional capacity to the storage pool. What should you do first?

A) Add physical disks to the server and run Update-StoragePool

B) Create a new storage tier

C) Extend existing virtual disks

D) Add the server to maintenance mode

Answer: A

Explanation:

The correct answer is option A. To expand capacity in a Storage Spaces Direct environment, you must first add physical disks to one or more servers in the cluster, and then update the storage pool to recognize and incorporate the new capacity. The Update-StoragePool cmdlet refreshes the storage pool’s view of available physical disks and automatically includes newly added disks in the pool’s available capacity.

The process involves physically installing new drives in the server (or servers if you’re scaling across multiple nodes), waiting for the system to recognize the new hardware, and then running the PowerShell command Update-StoragePool -FriendlyName “S2D on ClusterName” to incorporate the new disks. Storage Spaces Direct will automatically rebalance data across the expanded pool to optimize performance and capacity utilization. After the pool recognizes the new capacity, you can then extend existing virtual disks or create new virtual disks to utilize the additional space. This approach maintains the health and integrity of the storage pool while seamlessly expanding available capacity.
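
A short PowerShell sketch of this sequence, run from a cluster node and using a placeholder pool name:

```powershell
# Confirm the newly installed drives are visible and available for pooling
Get-PhysicalDisk -CanPool $true

# Refresh the pool so it incorporates the new disks
Update-StoragePool -FriendlyName "S2D on Cluster01"

# Optionally rebalance existing data across the expanded pool
Optimize-StoragePool -FriendlyName "S2D on Cluster01"
```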

Option B is incorrect because creating a new storage tier is about defining performance characteristics for data placement (such as creating SSD-based fast tiers and HDD-based capacity tiers), not about adding capacity to the pool. Storage tiers allow you to implement tiered storage strategies where frequently accessed data automatically moves to faster media. While you might create or reconfigure tiers as part of a storage expansion project, creating a tier doesn’t add capacity—you must first add physical disks to the pool to increase available capacity before you can allocate that capacity to tiers.

Option C is incorrect because extending existing virtual disks is something you do after adding capacity to the storage pool, not before. Virtual disks are logical constructs that consume capacity from the underlying storage pool. You cannot extend a virtual disk beyond the available capacity in the pool. The correct sequence is to first add physical disks to increase pool capacity, then update the pool to recognize the new disks, and finally extend virtual disks or create new ones to utilize the expanded capacity. Attempting to extend virtual disks before adding pool capacity would fail.

Option D is incorrect because adding the server to maintenance mode (also called storage maintenance mode) is used when you need to perform maintenance on a server that might temporarily reduce fault tolerance, such as when removing drives, updating firmware, or taking a node offline. Maintenance mode safely migrates data and workloads away from the node to prevent data loss. However, when adding capacity by installing new disks, you don’t need to enter maintenance mode—adding disks is a non-disruptive operation that expands capacity without reducing redundancy or requiring workload migration.

Question 110

You manage a Windows Server 2022 environment with multiple application servers. You need to implement a monitoring solution that generates alerts when CPU usage exceeds 90% for more than 5 minutes. The solution must use native Windows Server tools. What should you configure?

A) Performance Monitor with data collector sets

B) Event Viewer custom views

C) Performance Monitor alerts with scheduled tasks

D) Windows Admin Center threshold alerts

Answer: C

Explanation:

The correct answer is option C. Performance Monitor alerts combined with scheduled tasks provide a native Windows Server solution for monitoring performance thresholds and triggering actions when those thresholds are exceeded. Performance Monitor still supports alert-type data collectors: you create a data collector set that watches CPU usage, configure it to run continuously, and have the alert start a scheduled task (or log an event) whenever the configured threshold is crossed.

The implementation involves creating a Performance Counter Alert data collector set in Performance Monitor that samples the \Processor(_Total)\% Processor Time counter at an appropriate interval (such as every 15 seconds) and defines an alert threshold of 90%. You then attach a Task Scheduler task to the alert so that when the threshold is crossed, the task executes alert actions such as sending email, writing to the event log, or running a script that confirms the condition has persisted for the full 5 minutes. Alternatively, you can use PowerShell scripts scheduled to run periodically that check performance counters with Get-Counter and raise alerts when the threshold has been exceeded for the specified duration.
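
As a rough sketch, a script like the following could be run by a scheduled task every five minutes; the event source and event ID are illustrative, and the source must be registered once beforehand.

```powershell
# Sample CPU every 15 seconds for 5 minutes (20 samples) and log a warning
# if every sample exceeded 90%.
# (Register the source once with: New-EventLog -LogName Application -Source "CpuMonitor")
$samples = Get-Counter '\Processor(_Total)\% Processor Time' `
    -SampleInterval 15 -MaxSamples 20
$values  = $samples.CounterSamples | Select-Object -ExpandProperty CookedValue

if (($values | Where-Object { $_ -le 90 }).Count -eq 0) {
    Write-EventLog -LogName Application -Source "CpuMonitor" -EventId 9001 `
        -EntryType Warning -Message "CPU above 90% for 5 consecutive minutes."
}
```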

Option A is incorrect because while data collector sets in Performance Monitor are excellent for collecting performance data over time and creating reports, they don’t inherently provide real-time alerting functionality with the requirement of sustained threshold violations (90% for more than 5 minutes). Data collector sets gather and log performance metrics, but you need additional mechanisms like scheduled tasks or scripts to evaluate that data and generate alerts. Data collector sets alone without alert configuration won’t notify administrators when thresholds are exceeded.

Option B is incorrect because Event Viewer custom views filter and display events from event logs based on specified criteria, but they’re passive viewing tools rather than active monitoring and alerting solutions. Custom views help administrators quickly find relevant events among thousands of log entries, but they don’t monitor performance counters like CPU usage or generate proactive alerts. Event Viewer shows you what has already happened and been logged, whereas the requirement calls for active monitoring that generates alerts when CPU usage exceeds thresholds.

Option D is incorrect because while Windows Admin Center is a modern web-based management tool that provides excellent monitoring capabilities and can display performance data, it’s not primarily designed as a standalone alerting platform for sustained threshold monitoring. Windows Admin Center can show current and historical performance data and might offer some alerting capabilities when connected to Azure Monitor, but for native on-premises alerting using only Windows Server tools, Performance Monitor with scheduled tasks provides more robust and customizable threshold-based alerting capabilities.

Question 111

You have a Windows Server 2022 server running Hyper-V with several virtual machines. You need to configure a virtual machine to use a specific amount of memory that cannot be changed dynamically during runtime. What should you configure?

A) Static memory allocation

B) Dynamic memory with minimum and maximum set to the same value

C) Smart Paging memory

D) Memory weight priority

Answer: A

Explanation:

The correct answer is option A. Static memory allocation assigns a fixed amount of RAM to a virtual machine that remains constant from startup through shutdown. When you configure static memory, you specify the exact amount of memory the VM will use, and this allocation cannot be changed while the VM is running. Static memory is appropriate for workloads that have predictable memory requirements, applications that don’t support dynamic memory well, or when you need guaranteed memory allocation that won’t be adjusted by Hyper-V’s dynamic memory management.

To configure static memory, you access the virtual machine settings, navigate to the Memory section, and ensure that “Enable Dynamic Memory” is unchecked. You then specify the startup memory value, which becomes the fixed memory allocation for the VM. With static memory, the specified amount of physical memory is fully allocated to the virtual machine when it starts, and this allocation remains constant throughout the VM’s runtime. This provides predictable performance but reduces flexibility in memory utilization compared to dynamic memory.
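
The equivalent PowerShell configuration is a single cmdlet call; the VM name and memory size below are placeholders.

```powershell
# Disabling dynamic memory makes the startup value the fixed allocation
Set-VMMemory -VMName "AppVM01" -DynamicMemoryEnabled $false -StartupBytes 8GB
```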

Option B is incorrect because while setting dynamic memory with minimum and maximum values to the same amount might seem like it would create a fixed memory allocation, dynamic memory still operates differently from static memory under the hood. Even with equal min/max values, Hyper-V’s dynamic memory management components remain active, the VM still requires dynamic memory drivers, and there’s still overhead from the dynamic memory management process. Additionally, some applications that detect dynamic memory may behave differently than when they detect static memory. True static memory allocation is achieved by disabling dynamic memory, not by constraining it to a single value.

Option C is incorrect because Smart Paging is a Hyper-V feature used specifically during virtual machine startup when dynamic memory is enabled and memory is overcommitted. If a VM with dynamic memory configured needs more memory than its minimum allocation to start up, but insufficient physical memory is available, Smart Paging temporarily uses disk-based paging to supplement memory during the startup phase. Smart Paging is not a memory allocation method—it’s a fallback mechanism to handle startup memory pressure, and it only applies to VMs using dynamic memory, not static memory.

Option D is incorrect because memory weight priority is a setting used when multiple virtual machines with dynamic memory are competing for a limited pool of physical memory. Memory weight determines which VMs have priority when Hyper-V must decide how to distribute available memory among VMs. Higher weight values give VMs higher priority for memory allocation. Memory weight doesn’t create fixed memory allocations—it influences how dynamic memory is distributed during contention. The question requires a configuration that prevents memory from changing dynamically, which is achieved through static memory, not through priority weighting.

Question 112

You manage a Windows Server 2022 environment with Active Directory Domain Services. You need to delegate the ability to reset user passwords and unlock accounts to the help desk team for a specific organizational unit without granting additional permissions. What should you do?

A) Add help desk users to the Account Operators group

B) Use the Delegation of Control Wizard to grant specific permissions on the OU

C) Add help desk users to the Domain Admins group temporarily

D) Configure fine-grained password policies for the OU

Answer: B

Explanation:

The correct answer is option B. The Delegation of Control Wizard in Active Directory Users and Computers provides a straightforward method to grant specific administrative permissions to users or groups for a particular organizational unit. This wizard allows you to delegate only the permissions needed—in this case, resetting user passwords and unlocking accounts—without granting broader administrative rights.

To use the Delegation of Control Wizard, you right-click the target organizational unit, select “Delegate Control,” add the help desk security group or users, and then select the specific tasks to delegate. The wizard includes predefined common tasks such as “Reset user passwords and force password change at next logon” and options for creating custom task delegations. After completing the wizard, members of the help desk team can perform only the delegated operations on user objects within that OU, adhering to the principle of least privilege. This approach provides granular permission management and maintains security by limiting administrative capabilities to only what’s necessary for the help desk role.
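
For scripted or repeatable delegation, the same rights can be granted with dsacls, as sketched below; the OU distinguished name and group name are placeholders, and the syntax should be validated in a lab before production use.

```powershell
# Grant the "Reset Password" control-access right on user objects in the OU
# (can also be run from an elevated command prompt)
dsacls "OU=Sales,DC=contoso,DC=com" /I:S /G "CONTOSO\HelpDesk:CA;Reset Password;user"

# Allow read/write of the lockoutTime attribute so the team can unlock accounts
dsacls "OU=Sales,DC=contoso,DC=com" /I:S /G "CONTOSO\HelpDesk:RPWP;lockoutTime;user"
```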

Option A is incorrect because the Account Operators group is a built-in domain local group with broad permissions across the entire domain, not limited to a specific OU. Members of Account Operators can create, modify, and delete user accounts, groups, and computers in most containers throughout the domain (except in protected containers like Domain Controllers). Granting Account Operators membership provides far more permissions than required and violates the principle of least privilege. Additionally, Account Operators permissions apply domain-wide, not just to the specific OU mentioned in the requirement.

Option C is incorrect because adding users to the Domain Admins group, even temporarily, grants full administrative control over the entire domain—an excessive permission level for help desk personnel who only need to reset passwords and unlock accounts in one OU. Domain Admins have unrestricted access to all domain resources, domain controllers, and Active Directory objects. Granting such elevated privileges creates significant security risks, and the “temporary” nature doesn’t mitigate the exposure during the time those permissions are active. This approach grossly violates the principle of least privilege and represents a security anti-pattern.

Option D is incorrect because fine-grained password policies (Password Settings Objects) are used to apply different password and account lockout policies to different groups of users within a domain. Fine-grained password policies control password complexity requirements, minimum password length, lockout thresholds, and similar settings for targeted user populations. These policies don’t delegate administrative permissions or grant users the ability to reset passwords for others. Fine-grained password policies are about enforcing password requirements on users, not about granting administrative capabilities to help desk staff.

Question 113

You have a Windows Server 2022 DHCP server managing IP address allocation for your network. You need to ensure that specific devices always receive the same IP address based on their MAC addresses, but you want these addresses to be outside the normal DHCP scope range. What should you configure?

A) DHCP reservations

B) DHCP exclusion ranges

C) DHCP filters

D) Static IP addresses on the client devices

Answer: A

Explanation:

The correct answer is option A. DHCP reservations allow you to assign specific IP addresses to devices based on their MAC addresses while still maintaining centralized management through the DHCP server. When you create a reservation, you specify the MAC address of the device and the IP address it should always receive. The reserved IP address can be within the scope range or outside of it, but still within the subnet managed by the DHCP server.

To configure reservations, you access the DHCP console, navigate to the appropriate scope, right-click on Reservations, and create a new reservation by entering the device name, IP address, and MAC address. The key advantage of reservations over static IP configuration is that all IP address assignments remain centrally managed in DHCP, making it easier to track, modify, and troubleshoot address allocations. Reserved addresses are excluded from normal DHCP lease assignments to other clients, ensuring that only the device with the specified MAC address receives that particular IP address. This approach provides the predictability of static addressing with the manageability benefits of DHCP.
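
A one-line PowerShell sketch of creating such a reservation, with placeholder scope, address, and MAC values:

```powershell
# Reserve 10.0.0.200 for the device with the specified MAC address
Add-DhcpServerv4Reservation -ScopeId 10.0.0.0 -IPAddress 10.0.0.200 `
    -ClientId "00-15-5D-01-0A-1B" -Name "Printer01" -Description "Lobby printer"
```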

Option B is incorrect because DHCP exclusion ranges define IP addresses within a scope that the DHCP server should not assign to any clients. Exclusions are typically used to set aside addresses for servers, network devices, or other systems that need static IP addresses configured manually. While exclusions prevent addresses from being assigned dynamically, they don’t create any association between devices and specific IP addresses. Exclusions simply mark certain addresses as unavailable for dynamic assignment—they don’t ensure that specific devices receive specific addresses based on MAC addresses.

Option C is incorrect because DHCP filters (allow and deny filters) control which devices can or cannot obtain IP addresses from the DHCP server based on their MAC addresses. Allow filters create a whitelist where only specified MAC addresses can receive DHCP leases, while deny filters create a blacklist that prevents specified MAC addresses from obtaining leases. Filters control access to DHCP services but don’t assign specific IP addresses to specific devices. A device passing through an allow filter would still receive any available address from the scope, not a predetermined specific address.

Option D is incorrect because configuring static IP addresses directly on client devices would achieve the goal of devices having consistent IP addresses, but it removes those devices from centralized DHCP management. With static configuration on devices, you lose the ability to centrally track IP assignments, easily modify configurations, and maintain consistent DNS and gateway settings through DHCP options. Additionally, manually configuring static IPs on numerous devices is more time-consuming and error-prone than managing reservations from a central DHCP server. The question specifically mentions a DHCP server context, suggesting a DHCP-based solution is preferred.

Question 114

You manage a Windows Server 2022 environment with multiple branch offices connected via slow WAN links. You need to implement a solution that caches frequently accessed files from the central file server at branch locations to improve access times. Which feature should you implement?

A) BranchCache in distributed cache mode

B) DFS Replication

C) Storage Replica

D) Work Folders

Answer: A

Explanation:

The correct answer is option A. BranchCache in distributed cache mode is specifically designed to improve access times for frequently accessed content from central servers over slow WAN links in branch office scenarios. In distributed cache mode, client computers at the branch office cache content locally and share it with other clients on the same subnet, creating a peer-to-peer caching system without requiring dedicated branch office servers.

When a branch office client requests a file from the central file server, BranchCache retrieves the file over the WAN link and caches it locally. When another client at the same branch requests the same file, BranchCache locates the cached copy on the first client’s computer and retrieves it over the fast local network rather than crossing the WAN link again. This dramatically reduces WAN bandwidth consumption and improves file access performance for branch users. BranchCache supports multiple protocols including HTTP/HTTPS and SMB, works transparently to users and applications, and requires no changes to existing file server infrastructure beyond enabling the BranchCache feature and configuring appropriate Group Policy settings.
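
A minimal sketch of the pieces involved is shown below; in practice, the client-side settings are usually deployed through Group Policy rather than run locally.

```powershell
# On the central file server: install BranchCache for network files
Install-WindowsFeature FS-BranchCache

# On a branch office client: enable distributed cache mode locally (GPO is typical)
Enable-BCDistributed

# Verify the resulting BranchCache configuration
Get-BCStatus
```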

Option B is incorrect because DFS Replication provides multi-master replication of entire folder structures between servers, not client-side caching of frequently accessed files. While DFSR can improve branch office performance by replicating file shares to branch servers so users access local copies, it requires servers at each branch location and replicates entire folder hierarchies rather than selectively caching only frequently accessed content. DFSR is a server-to-server solution that provides complete copies of data, whereas BranchCache is a client-side solution that caches content on-demand based on actual access patterns.

Option C is incorrect because Storage Replica is a block-level replication technology designed for disaster recovery and high availability scenarios. Storage Replica synchronously or asynchronously replicates entire volumes between servers or clusters, typically for the purpose of maintaining standby copies for failover. It’s not designed for improving branch office file access performance through caching—it’s focused on data protection and site resilience. Storage Replica requires significant bandwidth for initial synchronization and ongoing replication, which wouldn’t be appropriate for optimizing access over slow WAN links.

Option D is incorrect because Work Folders is a feature that allows users to synchronize their work files across multiple devices (PCs, tablets, smartphones) while maintaining corporate control over the data. Work Folders provides user-centric file synchronization similar to consumer cloud storage services but hosted on corporate infrastructure. While Work Folders does cache files locally on devices for offline access, it’s designed for mobile and remote user scenarios where individuals sync their personal work files, not for optimizing branch office access to shared file servers for multiple users.

Question 115

You have a Windows Server 2022 server running the DHCP Server role. You need to configure DHCP to provide different gateway addresses to clients based on their MAC address vendor prefix. What should you implement?

A) DHCP policies

B) DHCP scope options

C) DHCP server options

D) DHCP user classes

Answer: A

Explanation:

The correct answer is option A. DHCP policies in Windows Server 2012 and later provide advanced condition-based IP address assignment and option delivery. DHCP policies allow you to configure different DHCP settings for clients based on various criteria including MAC address, vendor class, user class, client identifier, and more. For this scenario, you would create a DHCP policy that uses the MAC address prefix as a condition to identify devices from specific vendors and then assigns different gateway addresses to those devices.

To implement this, you create policies at either the server level or scope level in the DHCP console. You define conditions based on MAC address patterns that match the vendor prefix (the first three octets of MAC addresses identify the manufacturer), and then configure policy-specific DHCP options including the router (gateway) option with different values for different vendor groups. When clients request DHCP leases, the server evaluates them against policy conditions and delivers the appropriate options based on which policies match. This provides tremendous flexibility for heterogeneous environments where different device types or manufacturers need different network configurations.
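
A PowerShell sketch of this configuration, assuming a hypothetical vendor OUI of 00-50-56 and placeholder scope and gateway addresses:

```powershell
# Create a scope-level policy matching the vendor's MAC prefix
Add-DhcpServerv4Policy -Name "VendorXDevices" -ScopeId 10.0.0.0 `
    -Condition OR -MacAddress EQ,"005056*"

# Hand matching clients a different default gateway (option 3)
Set-DhcpServerv4OptionValue -ScopeId 10.0.0.0 -PolicyName "VendorXDevices" `
    -Router 10.0.0.254
```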

Option B is incorrect because standard DHCP scope options apply uniformly to all clients receiving addresses from that scope. Scope options don’t provide conditional logic based on client characteristics like MAC address prefixes. While scope options are essential for delivering standard configuration parameters like gateways, DNS servers, and domain names to clients, they apply to all clients in the scope without discrimination. To provide different gateway addresses to different clients based on identifying characteristics, you need the conditional capabilities provided by DHCP policies.

Option C is incorrect because DHCP server options apply to all scopes on the DHCP server unless overridden by scope-level or policy-level options. Server options provide default values for settings that should be consistent across the entire DHCP server, such as DNS servers used throughout the organization. Like scope options, server options lack conditional logic and apply uniformly to clients. Server options cannot differentiate between clients based on MAC address prefixes or any other client attributes, so they cannot provide different gateway addresses to different vendor devices.

Option D is incorrect because DHCP user classes are identifiers that clients send in their DHCP requests to indicate they belong to a particular group, but they’re client-asserted values rather than server-identified conditions based on MAC addresses. User classes require clients to be configured to send specific class identifiers, and they’re typically used for administratively defined groupings. While you can configure different DHCP options for different user classes, this approach doesn’t work for identifying devices by MAC address vendor prefix—that requires DHCP policies with MAC address-based conditions that the server evaluates automatically.

Question 116

You manage a Windows Server 2022 environment with Network Policy Server (NPS) configured for 802.1X authentication. You need to configure the NPS to allow network access only during business hours (8 AM to 6 PM, Monday through Friday). Where should you configure this restriction?

A) Network policy conditions

B) Network policy constraints

C) Connection request policy conditions

D) RADIUS client properties

Answer: B

Explanation:

The correct answer is option B. Network policy constraints in NPS allow you to configure restrictions that must be satisfied for network access to be granted, including time-of-day restrictions. Constraints are evaluated after conditions have been met—if a connection request matches the policy conditions, the constraints are then checked to determine if access should be allowed. The “Day and time restrictions” constraint allows you to specify precisely when users are permitted to connect based on day of week and time of day.

To configure this, you open the Network Policy Server console, navigate to the appropriate network policy, access the policy properties, and go to the Constraints tab. There you’ll find the “Day and time restrictions” option where you can configure allowed connection times using a graphical schedule interface. You specify that connections are only permitted Monday through Friday from 8:00 AM to 6:00 PM. When users attempt to authenticate outside these hours, NPS will deny their connection requests even if all other authentication requirements are met. This time-based access control helps enforce security policies and can reduce unauthorized after-hours network access.

Option A is incorrect because network policy conditions determine whether a connection request matches the policy and should be evaluated by it. Conditions include criteria like user groups, machine groups, authentication types, NAS port types, and client IP addresses. Conditions are matching criteria that determine policy selection, not restrictions that deny access when violated. While there are many conditions available, time-of-day restrictions are implemented as constraints, not conditions. Conditions select which policy applies; constraints determine whether access is granted under that policy.

Option C is incorrect because connection request policy conditions are used to match incoming RADIUS requests to determine how they should be processed (handled locally, forwarded to another RADIUS server, or rejected). Connection request policies operate at a higher level than network policies—they determine request routing and processing before authentication occurs. Connection request policy conditions include criteria like user name, calling station ID, and client IP address, but they’re focused on request routing rather than imposing time-based access restrictions. Time-of-day restrictions are imposed through network policy constraints.

Option D is incorrect because RADIUS client properties in NPS define the VPN servers, wireless access points, switches, or other network access servers that send authentication requests to NPS. RADIUS client configuration includes the client’s IP address or hostname, shared secret for secure communication, and vendor-specific attributes. RADIUS client properties control the trust relationship between NPS and network access servers but don’t impose restrictions on end-user connections. Time-based access control for users is configured in network policies, not in the configuration of RADIUS clients themselves.

Question 117

You have a Windows Server 2022 Hyper-V host with multiple virtual machines running production workloads. You need to perform maintenance on the host without shutting down the virtual machines. The environment has a failover cluster. What should you do?

A) Use Quick Migration to move VMs to another host

B) Pause all virtual machines

C) Drain roles from the cluster node using cluster-aware maintenance mode

D) Export virtual machines to another location

Answer: C

Explanation:

The correct answer is option C. Draining roles from a cluster node, also called putting the node into maintenance mode, is the proper method for performing maintenance on a clustered Hyper-V host without downtime. When you drain a node, the failover cluster orchestrates live migration of all running virtual machines from that node to other available nodes in the cluster, ensuring continuous operation of all workloads without service interruption.

To drain a node, you can use Failover Cluster Manager by right-clicking the node and selecting “Pause” then “Drain Roles,” or use PowerShell with the Suspend-ClusterNode cmdlet with the -Drain parameter. The cluster automatically live migrates all VMs to other hosts based on available resources and placement policies. Once all roles are migrated and the node shows as drained, you can safely perform maintenance activities like applying updates, replacing hardware, or troubleshooting without affecting production workloads. After maintenance completes, you resume the node, and the cluster makes it available for hosting workloads again, potentially migrating some VMs back for load balancing.
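
The PowerShell equivalent of the drain-and-resume sequence, with a placeholder node name:

```powershell
# Live migrate all roles off the node and pause it for maintenance
Suspend-ClusterNode -Name "HV-Node02" -Drain

# ...perform maintenance, reboot if required...

# Return the node to service and fail roles back immediately
Resume-ClusterNode -Name "HV-Node02" -Failback Immediate
```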

Option A is incorrect because Quick Migration is an older migration technology that briefly pauses virtual machines, saves their state to disk, transfers the state files to another host, and then resumes the VMs on the destination. Quick Migration causes a brief interruption (typically several seconds to minutes depending on VM memory size) during which the virtual machines are unavailable. This doesn’t meet the requirement of performing maintenance “without shutting down the virtual machines” in the sense of maintaining continuous availability. Live Migration, which occurs automatically during the drain operation, provides seamless migration without VM downtime.

Option B is incorrect because pausing virtual machines suspends their execution and freezes their state in memory but leaves them running on the same host. Paused VMs don’t consume CPU cycles but still occupy memory and remain on the host. If you perform maintenance on the host while VMs are paused, you risk data loss or corruption if the maintenance requires a reboot or affects running processes. Pausing VMs doesn’t move them to another host, so it doesn’t enable maintenance without downtime. Additionally, paused VMs appear offline to users and applications, which constitutes an outage.

Option D is incorrect because exporting virtual machines creates copies of VM configuration files and virtual hard disks in another storage location. Export is a backup and portability operation; although Hyper-V on Windows Server 2012 R2 and later can export a running VM, the result is only a point-in-time copy, and the export process doesn’t relocate running VMs to another host, so the original workloads remain on the host being serviced. Export is used for VM backup, migration between non-clustered environments, or disaster recovery preparation, not for maintenance operations in clustered environments.

Question 118

You manage a Windows Server 2022 environment with Active Directory Certificate Services. You need to configure the Certificate Authority to automatically revoke certificates when user accounts are disabled in Active Directory. What should you configure?

A) Certificate template security permissions

B) CRL distribution points

C) Certificate revocation policy

D) Certificate auto-enrollment settings

Answer: C

Explanation:

The correct answer is option C. Windows Server doesn’t include an out-of-the-box feature that automatically revokes certificates when user accounts are disabled, so a certificate revocation policy implemented through custom scripting or third-party tools is the conceptual approach to this requirement. Among the available options, and in typical enterprise implementations, organizations create automated processes that monitor Active Directory account status changes and trigger certificate revocation through the CA’s administrative interfaces or APIs.

The most practical implementation involves creating scheduled tasks or event-triggered scripts that query Active Directory for disabled accounts, cross-reference those accounts against issued certificates, and programmatically revoke matching certificates using certutil commands or CA management interfaces. Some organizations implement this through System Center Orchestrator, PowerShell workflows, or custom scripts that call the ICertAdmin interface. The revocation policy encompasses the rules and automation that govern when and how certificates should be revoked based on account status changes, effectively linking identity lifecycle management with certificate lifecycle management.
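
As an illustration of the scripted approach described above, here is a minimal, hypothetical PowerShell sketch. The CA configuration string, the method of mapping disabled accounts to certificate serial numbers, and the revocation reason code are assumptions that would need to be adapted to the environment:

```powershell
# Hypothetical sketch: revoke certificates issued to disabled AD user accounts.
# The CA config string "CA01.contoso.com\Contoso-Issuing-CA" is a placeholder.
Import-Module ActiveDirectory

$caConfig = "CA01.contoso.com\Contoso-Issuing-CA"

# Find disabled user accounts in the directory
$disabledUsers = Get-ADUser -Filter 'Enabled -eq $false'

foreach ($user in $disabledUsers) {
    # Looking up the serial numbers of certificates issued to this user is
    # environment-specific; for example, query the CA database with
    #   certutil -view -restrict "RequesterName=CONTOSO\<sam>" -out SerialNumber
    # and parse the output, or use a CA management module.
    $serialNumbers = @()   # populate from the CA database query above

    foreach ($serial in $serialNumbers) {
        # Reason code 5 = Cessation Of Operation
        certutil -config $caConfig -revoke $serial 5
    }
}
```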

Option A is incorrect because certificate template security permissions control who can enroll for certificates based on that template and which CA certificate managers can issue certificates from that template. Template permissions define enrollment rights, autoenrollment eligibility, and read/write access to template properties. While proper security permissions are fundamental to certificate management, they don’t provide functionality for automatically revoking certificates when accounts are disabled. Permissions are about access control during issuance, not automated lifecycle management after issuance.

Option B is incorrect because CRL (Certificate Revocation List) distribution points specify where clients can download current revocation information to check whether certificates have been revoked. CRL DPs are URLs included in issued certificates that point to locations where updated CRLs are published. While CRLs are essential for distributing revocation information after certificates have been revoked, configuring CRL distribution points doesn’t create any automation for revoking certificates when accounts are disabled. CRL DPs are about publishing revocation information, not triggering revocations based on directory events.

Option D is incorrect because certificate auto-enrollment settings configure automatic enrollment and renewal of certificates for users and computers based on Group Policy and certificate template configurations. Auto-enrollment ensures that entities automatically receive certificates they’re entitled to without manual intervention. However, auto-enrollment doesn’t include functionality for automatically revoking certificates when accounts are disabled. Auto-enrollment focuses on certificate issuance and renewal, not revocation based on account status changes. These are separate lifecycle phases with different management mechanisms.

Question 119

You have a Windows Server 2022 file server with several shared folders containing sensitive documents. You need to implement a solution that tracks all file access, modifications, and deletions for auditing purposes. What should you configure?

A) File Server Resource Manager file screens

B) Advanced Audit Policy Configuration for object access

C) NTFS permissions auditing

D) Windows Defender Application Control

Answer: B

Explanation:

The correct answer is option B. Advanced Audit Policy Configuration provides granular auditing capabilities for tracking detailed file system activities including file access, modifications, and deletions. Specifically, you need to configure the “Audit File System” subcategory under “Object Access” in Advanced Audit Policy, combined with configuring auditing on the NTFS file system objects themselves through their security properties.

To implement comprehensive file auditing, you first enable the appropriate audit policies through Group Policy under Computer Configuration > Windows Settings > Security Settings > Advanced Audit Policy Configuration > Object Access > Audit File System. You configure this to audit both success and failure events. Then, on the file server, you configure SACL (System Access Control List) entries on the folders and files you want to audit by accessing their security properties, navigating to the Auditing tab, and specifying which users or groups and which actions (read, write, delete, change permissions, etc.) should be audited. Once configured, all specified file operations generate security audit events in the Security event log, providing a comprehensive audit trail of file access and modifications.
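
As a hedged sketch of both halves of that configuration, run locally on the file server (auditpol sets the local equivalent of the Group Policy setting; the folder path and the audited principal are placeholders):

```powershell
# Sketch: enable the File System audit subcategory and add a SACL entry on a folder.
# "D:\Shares\Sensitive" and the "Everyone" principal are placeholders.

# 1. Enable Object Access > File System auditing (normally deployed via Group Policy)
auditpol /set /subcategory:"File System" /success:enable /failure:enable

# 2. Add an audit (SACL) entry for read, write, and delete operations
$path = "D:\Shares\Sensitive"
$acl  = Get-Acl -Path $path -Audit

$rule = [System.Security.AccessControl.FileSystemAuditRule]::new(
    "Everyone",
    "ReadData,WriteData,Delete,DeleteSubdirectoriesAndFiles",
    "ContainerInherit,ObjectInherit",
    "None",
    "Success,Failure")

$acl.AddAuditRule($rule)
Set-Acl -Path $path -AclObject $acl
```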

Option A is incorrect because File Server Resource Manager file screens are used to block users from saving unauthorized file types to specific locations based on file extensions. File screens are preventive controls that restrict which types of files can be stored (such as preventing executable files or media files from being saved to user directories), but they don’t provide auditing or tracking of file access and modifications. File screens are about blocking unwanted file types, not monitoring access to allowed files. FSRM can generate reports on file usage, but these don’t provide the real-time auditing and detailed tracking required for security compliance.

Option C is incorrect because, although it comes close to the right answer, it is incomplete on its own. NTFS permissions auditing refers to configuring SACL entries on files and folders to specify what should be audited, which is indeed part of the solution. However, NTFS auditing alone isn’t sufficient; you must also enable the corresponding audit policies through Group Policy (Advanced Audit Policy Configuration) for the audit events to actually be generated and logged. The answer “Advanced Audit Policy Configuration for object access” is more complete because it encompasses both enabling the policy and the subsequent NTFS auditing configuration required for comprehensive file access tracking.

Option D is incorrect because Windows Defender Application Control (formerly known as Device Guard) is a code integrity and application control solution that restricts which applications and scripts can run on Windows systems. WDAC creates policies that allow only trusted applications to execute, helping prevent malware and unauthorized software from running. While WDAC enhances security, it doesn’t provide file access auditing capabilities. WDAC is about controlling application execution, not monitoring or tracking file system operations like reads, writes, and deletions of documents.

Question 120

You manage a Windows Server 2022 environment with multiple web applications hosted in IIS. You need to isolate each web application so that if one application crashes or becomes compromised, it doesn’t affect other applications. What should you configure?

A) Application pool isolation with different identities

B) Website bindings with different ports

C) IIS request filtering

D) Web gardens with multiple worker processes

Answer: A

Explanation:

The correct answer is option A. Application pool isolation is the primary IIS feature for isolating web applications from each other. Each application pool runs in its own worker process (w3wp.exe) with its own memory space, and you can configure each pool to run under different security identities. By placing each web application in its own dedicated application pool with unique identity credentials, you ensure that crashes, memory leaks, or security compromises in one application don’t affect applications in other pools.

To implement this isolation, you create separate application pools for each web application in IIS Manager, configure each pool with appropriate settings (process model, recycling, limits), and assign different security identities to each pool using the ApplicationPoolIdentity or custom service accounts. Then you assign each web application to its dedicated application pool. When an application crashes or experiences errors, only its worker process terminates and recycles, leaving other applications running normally in their own processes. This isolation also provides security benefits because each application runs with minimal permissions under its own identity, limiting the potential damage if an application is compromised. Application pool isolation represents a fundamental best practice for IIS security and stability.
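
As a brief illustration, the following sketch uses the WebAdministration PowerShell module; the pool, site, and path names (“App1Pool”, “Default Web Site”, “C:\inetpub\app1”) are placeholders:

```powershell
# Sketch: create a dedicated application pool and assign a web application to it.
Import-Module WebAdministration

# Create an isolated application pool for the application
New-WebAppPool -Name "App1Pool"

# Run the pool under its own virtual account (4 = ApplicationPoolIdentity)
Set-ItemProperty -Path "IIS:\AppPools\App1Pool" -Name processModel.identityType -Value 4

# Create the application under the site and bind it to the dedicated pool
New-WebApplication -Name "App1" -Site "Default Web Site" `
    -PhysicalPath "C:\inetpub\app1" -ApplicationPool "App1Pool"
```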

Option B is incorrect because while configuring website bindings with different ports allows multiple websites to coexist on the same server and be accessed through different port numbers, it doesn’t provide process-level isolation. Websites bound to different ports can still share the same application pool and run in the same worker process, meaning a crash in one could affect others. Port-based separation is about network accessibility and URL addressing, not about process isolation or security boundaries. Multiple websites sharing an application pool remain vulnerable to each other’s failures regardless of which ports they listen on.

Option C is incorrect because IIS request filtering is a security feature that screens incoming HTTP requests and blocks those that match specified criteria such as suspicious URLs, specific file extensions, request limits, or known attack patterns. Request filtering helps protect web applications from common attacks like SQL injection or directory traversal, but it doesn’t isolate applications from each other. Request filtering is a preventive security control that operates at the HTTP request level, not a process isolation mechanism. It helps prevent attacks but doesn’t contain the impact if an application crashes or is compromised.

Option D is incorrect because web gardens involve configuring an application pool to use multiple worker processes instead of just one, allowing a single application to spread its workload across multiple processes for better performance on multi-core systems. While web gardens provide some fault tolerance (if one worker process crashes, others continue serving requests), they’re designed for load distribution within a single application, not for isolating different applications from each other. Web gardens still represent a single application pool shared by one application, whereas the requirement calls for isolating multiple different applications from each other, which requires separate application pools.

 
