Question 161
You have a Windows Server 2022 server running the Web Server (IIS) role with multiple websites. You need to configure IIS to automatically start a specific application pool before any user requests are received. What should you configure?
A) Application pool start mode to AlwaysRunning
B) Application pool idle timeout
C) Application pool recycling schedule
D) Web garden configuration
Answer: A
Explanation:
The correct answer is option A. Setting the application pool start mode to “AlwaysRunning” ensures that the application pool starts automatically when IIS starts, rather than waiting for the first user request. This eliminates cold start delays and ensures applications are immediately available to serve requests without initialization wait times.
By default, application pools use “OnDemand” start mode, which means they only start when the first request arrives. This causes delays for initial users while the application initializes. With “AlwaysRunning” mode, the worker process starts during IIS service startup, loading assemblies, initializing frameworks, and warming up the application. You can further enhance this by enabling application preload (preloadEnabled=true), which sends a simulated request to the application during startup so it is fully initialized before real traffic arrives. This combination provides optimal user experience by ensuring applications are ready immediately.
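As an illustration, a minimal PowerShell sketch (assuming the WebAdministration module, a hypothetical pool named “MyAppPool”, and the default website) might look like:

Import-Module WebAdministration
# Start the worker process with IIS instead of waiting for the first request
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name startMode -Value AlwaysRunning
# Optionally warm up applications in the site at startup
Set-ItemProperty "IIS:\Sites\Default Web Site" -Name applicationDefaults.preloadEnabled -Value True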
Option B is incorrect because idle timeout controls when application pools shut down after periods of inactivity, not when they start. Idle timeout conserves resources by stopping unused pools but doesn’t address startup behavior.
Option C is incorrect because recycling schedules control when application pools restart for maintenance purposes (memory cleanup, configuration updates). While recycling affects availability, it doesn’t control initial startup behavior or eliminate cold start delays.
Option D is incorrect because web gardens configure multiple worker processes for a single pool to improve performance on multi-core systems, not to control startup timing. Web gardens distribute load but don’t affect when pools start relative to user requests.
Question 162
You manage a Windows Server 2022 environment with multiple Hyper-V hosts in a failover cluster. You need to configure anti-affinity rules to prevent specific virtual machines from running on the same host simultaneously. What should you configure?
A) Cluster anti-affinity class sets
B) Virtual machine priority settings
C) Preferred owners list
D) Cluster node weights
Answer: A
Explanation:
The correct answer is option A. Cluster anti-affinity class sets allow you to define groups of virtual machines that should not run on the same host simultaneously. This ensures fault isolation by preventing related VMs (like multiple domain controllers or redundant application servers) from sharing a single point of failure.
To configure anti-affinity, you use PowerShell to create anti-affinity class names and assign them to VM cluster resources. VMs with the same anti-affinity class name will be distributed across different cluster nodes. For example, if you have three domain controllers, assigning them the same anti-affinity class ensures they run on different hosts. During initial placement and failover operations, the cluster respects these anti-affinity rules, maximizing availability by preventing co-location of redundant services.
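For example, a minimal PowerShell sketch (the group name “DC01” and class name “DomainControllers” are placeholders) might look like:

# Assign the same anti-affinity class name to each domain controller's cluster group
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("DomainControllers")
(Get-ClusterGroup -Name "DC01").AntiAffinityClassNames = $class
# Repeat for DC02 and DC03; groups sharing a class name are kept on different nodes where possible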
Option B is incorrect because virtual machine priority settings determine which VMs start first during cluster startup or recovery, not where they run relative to each other. Priority affects startup sequence but doesn’t prevent VMs from running on the same host.
Option C is incorrect because preferred owners specify which cluster nodes a resource prefers to run on, essentially creating affinity rather than anti-affinity. Preferred owners encourage co-location, which is opposite to the requirement.
Option D is incorrect because cluster node weights influence quorum calculations and voting in cluster decisions, not VM placement. Node weights affect cluster consensus mechanisms but don’t control which VMs run on which hosts.
Question 163
You have a Windows Server 2022 DNS server authoritative for your domain. You need to configure the DNS server to prevent it from responding to recursive queries from external networks while still allowing recursion for internal clients. What should you configure?
A) Disable recursion for specific subnets using DNS policies
B) Remove root hints
C) Configure forwarders only
D) Disable recursion globally
Answer: A
Explanation:
The correct answer is option A. DNS policies allow you to configure conditional recursion based on client subnet, enabling you to permit recursive queries from internal networks while denying them from external sources. This protects your DNS server from being exploited as an open resolver in amplification attacks while maintaining full functionality for authorized clients.
You create DNS query resolution policies using PowerShell that evaluate the source subnet of incoming queries. Internal subnets receive normal recursive resolution, while external queries are denied recursion or ignored entirely. This granular control ensures that only authorized clients can use your DNS server for resolution beyond zones you host authoritatively. The implementation prevents DNS abuse while preserving internal functionality, representing a security best practice for Internet-facing DNS servers.
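A minimal PowerShell sketch (the subnet and scope names are illustrative) might look like:

# Define the internal subnet and a recursion scope that allows recursion
Add-DnsServerClientSubnet -Name "InternalSubnet" -IPv4Subnet "10.0.0.0/8"
Add-DnsServerRecursionScope -Name "InternalRecursion" -EnableRecursion $true
# Disable recursion in the default scope so external clients receive no recursion
Set-DnsServerRecursionScope -Name "." -EnableRecursion $false
# Apply the permissive recursion scope only to queries arriving from the internal subnet
Add-DnsServerQueryResolutionPolicy -Name "AllowInternalRecursion" -Action ALLOW -ApplyOnRecursion -RecursionScope "InternalRecursion" -ClientSubnet "EQ,InternalSubnet"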
Option B is incorrect because removing root hints only affects how the DNS server performs iterative resolution starting from root servers. Without root hints, recursion still functions but relies on forwarders. This doesn’t prevent external clients from using your server recursively.
Option C is incorrect because configuring forwarders determines where your DNS server sends queries it cannot answer authoritatively, not who can request recursion. Forwarders optimize resolution but don’t restrict which clients can use recursive services.
Option D is incorrect because disabling recursion globally prevents all clients, including internal ones, from using your DNS server for recursive resolution. This breaks internal name resolution for domains you don’t host, contradicting the requirement to maintain recursion for internal clients.
Question 164
You manage a Windows Server 2022 file server with Data Deduplication enabled. You need to verify the space savings achieved through deduplication and monitor the deduplication status. What should you use?
A) Get-DedupStatus PowerShell cmdlet
B) Performance Monitor counters
C) File Server Resource Manager reports
D) Storage Spaces Direct health service
Answer: A
Explanation:
The correct answer is option A. The Get-DedupStatus PowerShell cmdlet provides comprehensive information about Data Deduplication status including space savings, optimization rate, deduplication percentage, and last optimization time. This cmdlet is the primary tool for monitoring deduplication effectiveness and health.
Running Get-DedupStatus displays key metrics: SavedSpace shows actual storage savings achieved, OptimizedFilesSavingsRate indicates the percentage of space saved, OptimizedFilesCount reveals how many files have been optimized, and InPolicyFilesCount shows eligible files. The LastOptimizationResult indicates whether recent operations succeeded. This information helps administrators assess deduplication effectiveness, justify storage investments, and troubleshoot issues. You can run this cmdlet regularly or incorporate it into monitoring scripts for ongoing visibility into deduplication operations and benefits.
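For example (the volume letter is illustrative):

# Summarize deduplication savings and health for a specific volume
Get-DedupStatus -Volume "E:" |
    Format-List Volume, SavedSpace, OptimizedFilesCount, InPolicyFilesCount, LastOptimizationTime, LastOptimizationResult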
Option B is incorrect because while Performance Monitor has some deduplication-related counters for real-time monitoring, it doesn’t provide the comprehensive summary statistics and space savings information that Get-DedupStatus offers. Performance counters track operational metrics but lack the high-level reporting needed to assess overall effectiveness.
Option C is incorrect because File Server Resource Manager reports focus on file screening, quotas, and storage usage patterns, not specifically on Data Deduplication metrics. FSRM and deduplication are separate features with different reporting tools.
Option D is incorrect because Storage Spaces Direct health service monitors storage infrastructure health in software-defined storage environments, not Data Deduplication operations. While both relate to storage, they serve different purposes and have separate monitoring mechanisms.
Question 165
You have a Windows Server 2022 server running Active Directory Certificate Services. You need to configure the CA to automatically approve certificate requests for domain computers. What should you configure?
A) Certificate template permissions and autoenrollment
B) Certificate Manager approval queue
C) CRL distribution points
D) Certificate Enrollment Web Services
Answer: A
Explanation:
The correct answer is option A. To enable automatic certificate issuance for domain computers, you must configure appropriate permissions on certificate templates and enable autoenrollment through Group Policy. The template permissions determine which security principals can enroll for certificates, while autoenrollment automates the request and installation process.
You duplicate an appropriate certificate template (like Computer), configure permissions to allow “Domain Computers” with Enroll and Autoenroll rights, and publish the template to the CA. Then configure Group Policy under Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies > Certificate Services Client – Auto-Enrollment to enable automatic enrollment. When computers refresh Group Policy, they automatically request certificates from templates they’re authorized for, receive them without administrator intervention, and automatically renew them before expiration. This seamless process ensures computers maintain valid certificates for authentication, encryption, and other security functions.
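Once the template and Group Policy are in place, you can verify the behavior on a domain computer with a couple of standard commands, for example:

gpupdate /target:computer /force   # pull the updated autoenrollment policy
certutil -pulse                    # trigger the autoenrollment task immediately
certutil -store My                 # confirm the computer certificate appears in the machine store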
Option B is incorrect because the Certificate Manager approval queue is for manually approving pending certificate requests when templates require manager approval. Automatic approval means requests are immediately issued without entering the approval queue, making manual approval unnecessary.
Option C is incorrect because CRL distribution points specify where clients download revocation information, not how certificates are issued. CDPs enable clients to check certificate validity but don’t affect the enrollment or approval process.
Option D is incorrect because Certificate Enrollment Web Services provide web-based enrollment capabilities for non-domain devices or scenarios where traditional autoenrollment isn’t available. For domain computers, native autoenrollment through Group Policy is simpler and more appropriate than web enrollment.
Question 166
You manage a Windows Server 2022 Hyper-V environment with several virtual machines. You need to limit the network bandwidth available to a specific virtual machine to prevent it from saturating the network connection. What should you configure?
A) Virtual machine network adapter bandwidth management
B) Quality of Service (QoS) policies
C) Network adapter teaming
D) Virtual switch port mirroring
Answer: A
Explanation:
The correct answer is option A. Hyper-V provides built-in bandwidth management capabilities on virtual network adapters that allow you to set minimum and maximum bandwidth limits per virtual machine. This feature prevents individual VMs from monopolizing network resources and enables fair bandwidth distribution across multiple workloads.
You configure bandwidth management in the VM settings under network adapter properties. The maximum bandwidth setting caps the network throughput the VM can achieve, measured in Mbps. The minimum bandwidth setting (weight-based) ensures the VM receives a guaranteed share of network capacity during contention. This granular control prevents noisy neighbor problems where one VM’s network activity degrades performance for others. Bandwidth management operates at the virtual switch level and is enforced by the Hyper-V host, making it transparent to guest operating systems. This approach provides effective network resource management without requiring guest-level configuration or third-party tools.
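For example, a hedged PowerShell sketch (the VM name and values are placeholders; the minimum bandwidth weight assumes the virtual switch was created with weight-based bandwidth management):

# Cap the VM's network throughput at roughly 200 Mbps (value is in bits per second)
Set-VMNetworkAdapter -VMName "AppVM01" -MaximumBandwidth 200000000
# Give the VM a relative share of bandwidth during contention
Set-VMNetworkAdapter -VMName "AppVM01" -MinimumBandwidthWeight 20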
Option B is incorrect because while QoS policies can control network traffic priority and bandwidth, they’re typically implemented at the network infrastructure level (routers, switches) or through Windows Group Policy affecting local network stacks. Hyper-V’s native bandwidth management provides more direct and simpler control for VM network limiting.
Option C is incorrect because network adapter teaming combines multiple physical NICs for redundancy and increased bandwidth, not for limiting individual VM bandwidth. Teaming improves capacity and availability but doesn’t restrict specific VMs.
Option D is incorrect because port mirroring copies network traffic from one port to another for monitoring or analysis purposes, not for bandwidth limiting. Port mirroring is a diagnostic feature unrelated to bandwidth management or traffic control.
Question 167
You have a Windows Server 2022 environment with Active Directory Domain Services. You need to configure a Group Policy setting that applies only to laptop computers, not desktop computers. What should you implement?
A) WMI filtering on the Group Policy Object
B) Security filtering based on computer groups
C) OU-based GPO linking
D) Group Policy loopback processing
Answer: A
Explanation:
The correct answer is option A. WMI (Windows Management Instrumentation) filtering allows you to apply Group Policy Objects based on WMI query results, enabling conditional application based on hardware characteristics. You can create WMI filters that query chassis type or battery presence to distinguish laptops from desktops.
To implement this, you create a WMI filter in Group Policy Management using a query like “SELECT * FROM Win32_SystemEnclosure WHERE ChassisTypes = 9 OR ChassisTypes = 10” (where 9=Laptop, 10=Notebook). You then link this filter to your GPO, and the policy only applies to computers matching the query. This enables laptop-specific settings like power management, VPN configurations, or offline files without affecting desktops. WMI filtering provides flexible, attribute-based targeting beyond simple OU or group membership, allowing policies based on hardware characteristics, operating system versions, or other system properties.
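You can check what a given machine reports before building the filter, for example:

# Returns the SMBIOS chassis type codes for the local machine (9 = Laptop, 10 = Notebook)
Get-CimInstance -ClassName Win32_SystemEnclosure | Select-Object -ExpandProperty ChassisTypes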
Option B is incorrect because while security filtering based on computer groups works, it requires manually maintaining group membership for all laptops. WMI filtering automatically identifies laptops based on hardware characteristics without manual group management, providing a more scalable and maintainable solution.
Option C is incorrect because OU-based linking requires organizing laptops and desktops into separate OUs, which may conflict with other organizational structures based on departments or locations. WMI filtering allows targeting based on device type regardless of OU placement.
Option D is incorrect because loopback processing changes how user policies apply, using computer location instead of user location. It doesn’t distinguish between device types like laptops versus desktops.
Question 168
You manage a Windows Server 2022 DHCP server providing IP addresses to client computers. You need to configure the DHCP server to register DNS records on behalf of clients that don’t support dynamic DNS updates. What should you configure?
A) Enable “Always dynamically update DNS records” in DHCP scope properties
B) Configure DNS suffix in DHCP options
C) Enable DHCP-DNS integration
D) Configure WINS settings
Answer: A
Explanation:
The correct answer is option A. Configuring the DHCP server to “Always dynamically update DNS A and PTR records” ensures that the DHCP server registers both forward (A) and reverse (PTR) DNS records on behalf of all clients, including those that don’t support dynamic DNS updates or have it disabled.
In the DHCP scope properties under the DNS tab, you have three options: never update DNS records, update only if requested by clients, or always update records regardless of client capabilities. Selecting “Always dynamically update DNS A and PTR records” makes the DHCP server responsible for all DNS registration, ensuring that even legacy devices, network appliances, or systems with disabled dynamic updates have their DNS records maintained. This centralized approach provides consistent DNS registration across heterogeneous environments. The DHCP server authenticates to DNS using its own credentials when updating records, which requires appropriate permissions in DNS zones.
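The equivalent scope-level configuration can also be applied with PowerShell, for example (the scope ID is illustrative):

# Register A and PTR records for every lease, including clients that never request updates
Set-DhcpServerv4DnsSetting -ScopeId 10.0.1.0 -DynamicUpdates Always -UpdateDnsRRForOlderClients $true
Get-DhcpServerv4DnsSetting -ScopeId 10.0.1.0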
Option B is incorrect because configuring DNS suffix in DHCP options (Option 15) tells clients which domain suffix to use but doesn’t cause the DHCP server to register DNS records. Clients still need to register themselves or have the DHCP server configured to register on their behalf.
Option C is incorrect because “DHCP-DNS integration” isn’t a specific setting but rather the general concept of DHCP and DNS working together. The specific configuration needed is the dynamic update setting in scope properties.
Option D is incorrect because WINS (Windows Internet Name Service) is a legacy NetBIOS name resolution service unrelated to DNS registration. WINS and DNS serve different naming systems, and WINS configuration doesn’t affect DNS record registration.
Question 169
You have a Windows Server 2022 file server with multiple shared folders. You need to implement a solution that prevents users from saving executable files to specific shared folders. What should you configure?
A) File Server Resource Manager file screens with executable file group
B) NTFS permissions denying execute access
C) Windows Defender Application Control
D) Share permissions
Answer: A
Explanation:
The correct answer is option A. File Server Resource Manager file screens provide the capability to block specific file types from being saved to designated folders based on file extensions. The built-in “Executable Files” file group includes common executable extensions like .exe, .dll, .bat, .cmd, and .scr.
To implement this protection, you install FSRM, create or modify file screens on target folders, and configure them to block the Executable Files group (or create custom groups with specific extensions). When users attempt to save blocked file types, the operation fails with a customizable error message explaining the policy. File screens operate at the file system level and work regardless of how files are accessed—through mapped drives, UNC paths, or applications. This prevents malware distribution, unauthorized software installation, and helps maintain compliance with security policies. You can configure active screens (block files) or passive screens (allow but notify) depending on requirements.
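A minimal PowerShell sketch (the share path is a placeholder) might look like:

# Install FSRM and create an active screen that blocks the built-in Executable Files group
Install-WindowsFeature -Name FS-Resource-Manager -IncludeManagementTools
New-FsrmFileScreen -Path "D:\Shares\Public" -IncludeGroup "Executable Files" -Active
Get-FsrmFileScreen -Path "D:\Shares\Public"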
Option B is incorrect because NTFS execute permissions control whether files can be run, not whether they can be saved. Denying execute permission prevents running executables but doesn’t prevent users from copying executable files to the location.
Option C is incorrect because Windows Defender Application Control controls which applications can run on endpoints, not which files can be saved to file servers. WDAC is endpoint security policy, while file screens are server-side storage policy.
Option D is incorrect because share permissions control overall access levels (read, change, full control) to shares but don’t provide granular file type filtering. Share permissions are too coarse-grained to distinguish between executable and non-executable files.
Question 170
You manage a Windows Server 2022 environment with Network Policy Server for VPN authentication. You need to configure NPS to allow VPN connections only during business hours (8 AM to 6 PM, Monday to Friday). What should you configure?
A) Day and time restrictions in network policy constraints
B) Connection request policy conditions
C) RADIUS client time-based settings
D) Network policy conditions
Answer: A
Explanation:
The correct answer is option A. Network policy constraints include day and time restrictions that control when network access is permitted. After a connection request matches network policy conditions and authentication succeeds, the constraints are evaluated to determine whether access should be granted based on the current time.
You configure time restrictions in the network policy’s Constraints tab, where you can specify allowed connection times using a weekly grid or schedule. You mark specific hours and days when connections are permitted, such as Monday through Friday from 8:00 AM to 6:00 PM. When users attempt to connect outside these hours, even with valid credentials, NPS denies access based on the time constraint violation. This time-based access control helps enforce security policies, reduce after-hours unauthorized access risks, and ensure compliance with organizational access policies. Existing connections established during allowed hours are not automatically terminated when the time window closes.
Option B is incorrect because connection request policy conditions determine whether NPS processes a request locally, forwards it to another RADIUS server, or rejects it. Connection request policies don’t evaluate time of day for access decisions; that’s the role of network policy constraints.
Option C is incorrect because RADIUS client settings define the network access servers (VPN servers, switches, access points) that communicate with NPS, not the access times for end users. RADIUS client configuration establishes trust relationships but doesn’t control when users can connect.
Option D is incorrect because network policy conditions determine whether a connection request matches the policy for evaluation, while constraints determine whether matched requests should be allowed. Time restrictions are implemented as constraints, not conditions.
Question 171
You have a Windows Server 2022 server running Hyper-V. You need to configure a virtual machine to use a physical graphics processing unit (GPU) for graphics-intensive applications. What should you implement?
A) Discrete Device Assignment (DDA)
B) RemoteFX vGPU
C) Enhanced Session Mode
D) Virtual machine integration services
Answer: A
Explanation:
The correct answer is option A. Discrete Device Assignment allows you to pass through physical PCIe devices, including GPUs, directly to virtual machines, providing near-native performance for graphics-intensive workloads. DDA dedicates the entire physical device to a single VM, bypassing the hypervisor’s virtualization layer for maximum performance.
To implement DDA, you must have compatible hardware (a server and processor with IOMMU support, such as Intel VT-d or AMD-Vi, and a PCIe device that supports passthrough), disable the device in the host OS, dismount it from the host using PowerShell cmdlets (Dismount-VMHostAssignableDevice), and assign it to the VM (Add-VMAssignableDevice). Once assigned, the VM has direct access to the GPU hardware, enabling GPU-accelerated applications like CAD, video editing, machine learning, or GPU compute workloads. DDA requires Windows Server 2016 or later and generation 2 VMs. The assigned device becomes unavailable to the host and other VMs.
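A hedged sketch of the assignment steps (the device name, VM name, and selection logic are illustrative):

# Find the GPU and its PCIe location path on the host
$dev = Get-PnpDevice -FriendlyName "*NVIDIA*" -Status OK | Select-Object -First 1
$path = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
# Disable and dismount the device from the host, then hand it to the VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $path -Force
Set-VM -VMName "GpuVM01" -AutomaticStopAction TurnOff
Add-VMAssignableDevice -LocationPath $path -VMName "GpuVM01"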
Option B is incorrect because RemoteFX vGPU was deprecated in Windows Server 2019 and has since been removed; it is not available in Windows Server 2022. While RemoteFX previously provided GPU virtualization, it’s no longer an option in current Windows Server versions, making DDA the supported solution.
Option C is incorrect because Enhanced Session Mode improves the Remote Desktop experience by enabling features like clipboard sharing and local resource redirection, but it doesn’t provide physical GPU access to VMs. Enhanced Session Mode is about connectivity features, not GPU assignment.
Option D is incorrect because integration services provide guest-host communication for features like time synchronization, heartbeat, and data exchange, but don’t enable GPU passthrough. Integration services are general VM functionality enhancements, not hardware assignment mechanisms.
Question 172
You manage a Windows Server 2022 DNS environment with multiple DNS servers. You need to implement a solution that distributes DNS query load across multiple servers based on client subnet location. What should you configure?
A) DNS policies with subnet-based query resolution
B) Round-robin DNS
C) DNS forwarding
D) Secondary zones
Answer: A
Explanation:
The correct answer is option A. DNS policies in Windows Server 2016 and later support subnet-based query resolution, allowing you to configure different DNS responses based on the client’s source subnet. This enables intelligent traffic distribution and load balancing across geographically distributed servers while improving user experience through location-based responses.
You create DNS client subnets representing different geographic regions or network segments, then create zone scopes containing location-specific resource records. DNS query resolution policies evaluate the client’s subnet and return appropriate responses from corresponding zone scopes. For example, clients from the US subnet receive IP addresses of US-based servers, while European clients get European server addresses. This provides geo-proximity routing, reduces latency, and distributes load across regional infrastructure. The solution requires no client-side configuration and operates transparently, making it ideal for load distribution in distributed environments.
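A minimal PowerShell sketch (the zone, subnet, and addresses are illustrative) might look like:

# Clients in the US subnet receive the US web server's address from a dedicated zone scope
Add-DnsServerClientSubnet -Name "USSubnet" -IPv4Subnet "192.0.2.0/24"
Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "USZoneScope"
Add-DnsServerResourceRecord -ZoneName "contoso.com" -A -Name "www" -IPv4Address "203.0.113.10" -ZoneScope "USZoneScope"
Add-DnsServerQueryResolutionPolicy -Name "USPolicy" -Action ALLOW -ClientSubnet "eq,USSubnet" -ZoneScope "USZoneScope,1" -ZoneName "contoso.com"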
Option B is incorrect because round-robin DNS rotates through multiple IP addresses for the same hostname but doesn’t consider client location. All clients receive all IP addresses in rotating order, without intelligence about which server is closest or most appropriate.
Option C is incorrect because DNS forwarding directs queries for specific domains to designated servers but doesn’t distribute query load based on client location. Forwarders are about query routing between DNS servers, not location-based load balancing for clients.
Option D is incorrect because secondary zones provide read-only replicas of primary zones for redundancy and load distribution, but without intelligence about which clients should use which servers. Secondary zones improve availability but don’t provide geo-based intelligent routing.
Question 173
You have a Windows Server 2022 file server with Distributed File System Namespace configured. You need to ensure that users are directed to the closest file server based on their Active Directory site. What should you enable?
A) Site-aware DFS referrals (ordering by site cost)
B) DFS Replication bandwidth throttling
C) Folder target priority override
D) Access-based enumeration
Answer: A
Explanation:
The correct answer is option A. Site-aware DFS referrals automatically direct clients to file servers in their local Active Directory site, minimizing WAN traffic and improving access performance. The DFS Namespace server uses Active Directory site topology and inter-site link costs to determine the optimal server for each client.
This feature is enabled by default for domain-based namespaces. When clients request referrals for DFS folders with multiple targets, the namespace server evaluates the client’s site membership and returns referrals ordered by site proximity—local targets first, then targets in nearby sites based on configured site link costs. Clients connect to the first responding server in the referral list, which is typically a local server. This automatic site awareness optimizes network utilization without client configuration or user intervention. For standalone namespaces or to customize behavior, you can adjust ordering methods in namespace server settings, but domain-based namespaces default to site-aware ordering.
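If you need to confirm or re-enable the behavior, a short PowerShell check (the namespace path is illustrative) might look like:

Get-DfsnRoot -Path "\\contoso.com\Public"
Set-DfsnRoot -Path "\\contoso.com\Public" -EnableSiteCosting $true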
Option B is incorrect because DFS Replication bandwidth throttling controls replication traffic between servers, not client referrals. Throttling manages server-to-server replication bandwidth but doesn’t affect which servers clients are directed to.
Option C is incorrect because folder target priority override allows manual prioritization of specific targets over site-based automatic ordering. Override contradicts site awareness by imposing static preferences rather than dynamic site-based selection.
Option D is incorrect because access-based enumeration controls whether users see folders they don’t have permissions to access, not which servers they’re directed to. ABE is about folder visibility based on permissions, not referral optimization based on location.
Question 174
You manage a Windows Server 2022 environment with Active Directory Certificate Services configured as an enterprise CA. You need to revoke a certificate that was compromised. What should you do?
A) Revoke the certificate in the Certification Authority console and specify the revocation reason
B) Delete the certificate from Active Directory
C) Disable the user account associated with the certificate
D) Modify the certificate template to prevent issuance
Answer: A
Explanation:
The correct answer is option A. To revoke a compromised certificate, you use the Certification Authority console to explicitly revoke it and specify a reason code (such as Key Compromise, CA Compromise, or Cessation of Operation). This adds the certificate to the Certificate Revocation List, notifying clients that the certificate is no longer trustworthy.
In the CA console, you navigate to Issued Certificates, locate the compromised certificate, right-click it, and select “Revoke Certificate.” You choose an appropriate reason code explaining why revocation is necessary—for compromised certificates, “Key Compromise” is appropriate. The certificate’s serial number is added to the CRL during the next publication. Clients checking the CRL or querying OCSP responders will learn the certificate is revoked and reject it. Revocation is irreversible—certificates cannot be un-revoked once revoked for key compromise. This process ensures that compromised certificates cannot be used for authentication or encryption even before their normal expiration date.
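The same revocation can be performed from the command line with certutil, for example (the serial number is a placeholder; reason code 1 = Key Compromise):

certutil -revoke 1a2b3c4d5e6f7890 1   # revoke the certificate by serial number
certutil -crl                         # publish an updated CRL immediately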
Option B is incorrect because certificates aren’t stored in Active Directory in a way that deletion would revoke them. Even if you could delete certificate objects, this wouldn’t update the CRL to notify clients. Revocation must be performed through the CA’s revocation mechanism.
Option C is incorrect because disabling the user account prevents authentication but doesn’t revoke the certificate. Applications checking certificate validity through CRL or OCSP wouldn’t know the certificate is compromised unless formally revoked. Account status and certificate validity are separate.
Option D is incorrect because modifying certificate templates prevents future issuance but doesn’t revoke already-issued certificates. Template changes are prospective, not retroactive. Compromised certificates must be explicitly revoked through the CA console.
Question 175
You have a Windows Server 2022 Hyper-V failover cluster with multiple virtual machines. You need to configure the cluster to prevent a specific node from hosting virtual machines during normal operation while allowing it to host VMs during failover scenarios. What should you configure?
A) Set the node as preferred owner with passive role
B) Configure possible owners list excluding the node
C) Set cluster node drain mode
D) Configure anti-affinity rules
Answer: A
Explanation:
The correct answer is option A. Although the option’s wording is imprecise, the underlying concept is to configure preferred owners for the clustered VMs so that the specific node is excluded from the preferred owners list while remaining in the possible owners list. This allows the node to host VMs during failover emergencies but prevents it from hosting them during normal operations or planned migrations.
In Failover Cluster Manager, you configure VM properties to specify preferred owners (nodes that should actively host VMs) and possible owners (nodes that can host VMs during failover). By excluding a node from the preferred owners while keeping it in possible owners, you ensure it’s available for disaster recovery but not used during normal load balancing. This configuration is useful for asymmetric clusters where one node has different capabilities, for licensing scenarios, or when maintaining a designated failover node. The cluster respects these preferences during VM placement, migrations, and balancing operations.
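A hedged PowerShell sketch (the group, resource, and node names are placeholders):

# Preferred owners: nodes the VM should normally run on (Node3 deliberately omitted)
Set-ClusterOwnerNode -Group "SQLVM01" -Owners "Node1","Node2"
# Possible owners on the VM resource still include Node3 so it remains a valid failover target
Set-ClusterOwnerNode -Resource "Virtual Machine SQLVM01" -Owners "Node1","Node2","Node3"
Get-ClusterOwnerNode -Group "SQLVM01"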
Option B is incorrect because removing a node from possible owners completely prevents it from hosting VMs even during failover, which contradicts the requirement. Possible owners list determines failover eligibility; excluding a node means it can never host those VMs.
Option C is incorrect because drain mode is a temporary administrative state for maintenance, not a permanent configuration. Draining removes workloads temporarily but isn’t designed for permanent preferential placement policies.
Option D is incorrect because anti-affinity rules prevent specific VMs from running together on the same node, not for excluding nodes from general VM hosting. Anti-affinity is about VM-to-VM relationships, not node-to-VM preferences.
Question 176
You manage a Windows Server 2022 DNS server hosting several zones. You need to prevent DNS amplification attacks where your server is used to attack third parties. What should you configure?
A) Response Rate Limiting (RRL)
B) DNSSEC signing
C) Recursive query restrictions
D) Cache locking
Answer: A
Explanation:
The correct answer is option A. Response Rate Limiting is a DNS server feature specifically designed to mitigate DNS amplification attacks by detecting when identical responses are being sent to the same client repeatedly and throttling those responses. RRL prevents attackers from using your DNS server as an amplification vector in DDoS attacks.
DNS amplification attacks exploit open recursive resolvers by sending queries with spoofed source addresses, causing large DNS responses to be sent to attack victims. RRL detects abnormal patterns—many identical responses to the same address—and begins dropping or truncating responses to rate-limit the attack traffic. You configure RRL using the Set-DnsServerResponseRateLimiting PowerShell cmdlet, specifying parameters like responses per second, leak rate, and truncate rate. RRL maintains normal service for legitimate clients while drastically reducing amplification attack effectiveness. The feature is available in Windows Server 2016 and later and should be enabled on authoritative DNS servers exposed to the Internet.
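For example (the parameter values are illustrative starting points):

Set-DnsServerResponseRateLimiting -ResponsesPerSec 10 -ErrorsPerSec 5 -WindowInSec 5 -Mode Enable
Get-DnsServerResponseRateLimiting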
Option B is incorrect because DNSSEC signing provides response authenticity and integrity verification but doesn’t prevent amplification attacks. DNSSEC actually increases response sizes, potentially making amplification worse if exploited. DNSSEC addresses different security concerns.
Option C is incorrect because while restricting recursive queries prevents open resolver abuse, amplification attacks can also exploit authoritative responses for zones you host. Recursion restrictions help but don’t fully address amplification when attackers query your authoritative zones.
Option D is incorrect because cache locking prevents cache poisoning by protecting cached records from being overwritten, not preventing amplification attacks. Cache locking addresses cache integrity, while RRL addresses abuse of response traffic for attacks.
Question 177
You have a Windows Server 2022 file server with Storage Spaces Direct configured. You need to add capacity to the storage pool by adding new physical drives. What should you do first?
A) Add physical drives to servers and run Update-StoragePool
B) Run Optimize-StoragePool
C) Create new virtual disks
D) Enable storage maintenance mode
Answer: A
Explanation:
The correct answer is option A. To expand Storage Spaces Direct capacity, you physically install new drives in cluster nodes and then run Update-StoragePool to incorporate them into the storage pool. The pool automatically recognizes and begins using the new capacity for rebalancing and new volume allocation.
After physically installing drives and ensuring they’re recognized by the operating system, you execute Update-StoragePool -FriendlyName “S2D on ClusterName” in PowerShell. This command refreshes the pool’s view of available disks and adds the new drives to the usable capacity. Storage Spaces Direct automatically begins rebalancing data across the expanded drive set to optimize performance and capacity utilization. Once the pool recognizes the new capacity, you can extend existing volumes or create new ones. The process requires no downtime for existing volumes and occurs transparently. Adding drives across multiple nodes simultaneously provides the best balance and performance improvement.
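A minimal PowerShell sketch (the pool name is a placeholder following the default S2D naming convention):

Get-PhysicalDisk -CanPool $true                       # confirm the new drives are visible and not yet pooled
Update-StoragePool -FriendlyName "S2D on Cluster01"   # add the new drives to the pool
Optimize-StoragePool -FriendlyName "S2D on Cluster01" # rebalance data across the expanded drive set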
Option B is incorrect because Optimize-StoragePool performs post-addition rebalancing but doesn’t add drives to the pool. Optimization happens after Update-StoragePool incorporates new drives. Running optimize before updating wouldn’t recognize the new capacity.
Option C is incorrect because creating new virtual disks is what you do after expanding pool capacity, not before. You must first add physical drives and update the pool to have additional capacity available for new virtual disk allocation.
Option D is incorrect because storage maintenance mode is used when removing or servicing nodes/drives, not when adding capacity. Adding drives is a non-disruptive operation that doesn’t require maintenance mode. Maintenance mode protects data during potentially disruptive operations.
Question 178
Your organization is implementing a hybrid environment with on-premises Windows Servers and Azure. You need to ensure that users can authenticate seamlessly to both environments without repeatedly entering credentials. Which solution should you implement?
A) Active Directory Federation Services (AD FS)
B) Azure AD Connect Pass-through Authentication
C) Local Administrator Password Solution (LAPS)
D) Windows Hello for Business
Answer: B
Explanation:
When designing a hybrid identity solution, the primary goal is to allow users to authenticate seamlessly across on-premises and cloud resources. Azure AD Connect Pass-through Authentication lets users sign in to both on-premises Active Directory and Azure AD with the same credentials, eliminating the need for multiple logins. It authenticates users directly against the on-premises Active Directory without storing passwords in the cloud, providing a secure and user-friendly authentication mechanism.
Active Directory Federation Services (AD FS) is another approach for single sign-on (SSO), but it requires additional infrastructure, such as federation servers and Web Application Proxies, increasing administrative overhead. Local Administrator Password Solution (LAPS) focuses on managing local administrator passwords for individual machines, which is unrelated to enabling seamless authentication across hybrid environments. Windows Hello for Business is a modern authentication method that replaces passwords with biometrics or PINs on individual devices but does not inherently provide seamless access to cloud applications for all users.
By implementing Azure AD Connect Pass-through Authentication, organizations benefit from centralized identity management, simplified administration, and enhanced security. This method supports multi-factor authentication, conditional access policies, and real-time password validation against on-premises directories. It is particularly advantageous for organizations transitioning to the cloud in stages, as it enables gradual integration without disrupting existing workflows. Additionally, the solution is scalable and can accommodate thousands of users with minimal latency. Understanding and implementing hybrid identity best practices is crucial for the AZ-800 exam, as it demonstrates mastery of core principles in managing Windows Server environments integrated with Microsoft Azure.
Question 179
You need to deploy a highly available file server in your hybrid environment that can handle thousands of concurrent requests and provide disaster recovery capabilities. Which configuration should you use?
A) Standalone File Server on a single VM
B) Clustered File Server with Storage Spaces Direct (S2D)
C) Azure Blob Storage with access via SMB
D) Windows Distributed File System (DFS) without clustering
Answer: B
Explanation:
Deploying a highly available and resilient file server in a hybrid environment requires both redundancy and performance optimization. A Clustered File Server using Storage Spaces Direct (S2D) provides high availability by distributing storage across multiple nodes while presenting a single namespace to clients. Storage Spaces Direct leverages local storage on cluster nodes and combines it into a resilient, highly performant storage pool. This setup ensures that if one node fails, the system continues to serve client requests without interruption, maintaining business continuity.
Standalone File Server on a single VM lacks redundancy, and any failure of the virtual machine or underlying hardware could result in complete downtime. Azure Blob Storage with SMB access can be used for cloud-based file storage, but it is not a direct replacement for a high-performance on-premises file server capable of handling thousands of concurrent requests. DFS without clustering provides logical namespace and replication but does not inherently provide high availability for individual file servers or eliminate single points of failure.
Implementing a clustered file server with S2D also simplifies management and scalability. Administrators can add additional nodes to the cluster to increase capacity and performance dynamically. Additionally, combining clustering with Azure Site Recovery or other hybrid backup strategies enhances disaster recovery, ensuring data integrity in case of regional outages. Knowledge of these configurations demonstrates expertise in designing robust, hybrid storage solutions, which is essential for successfully passing the Microsoft AZ-800 exam.
Question 180
You are managing a hybrid environment where servers run Windows Server 2022 Datacenter. You need to deploy virtual machines on-premises while ensuring centralized management and monitoring through Azure. Which solution best meets these requirements?
A) Hyper-V Manager with manual VM monitoring
B) System Center Virtual Machine Manager (SCVMM) integrated with Azure Arc
C) Windows Admin Center without Azure integration
D) Azure Site Recovery for VM replication only
Answer: B
Explanation:
For hybrid environments that require centralized management of on-premises virtual machines with Azure integration, System Center Virtual Machine Manager (SCVMM) integrated with Azure Arc provides the most comprehensive solution. SCVMM allows administrators to create, deploy, and manage virtual machines on-premises, providing advanced features such as dynamic optimization, live migration, and resource allocation. By integrating SCVMM with Azure Arc, administrators can extend monitoring, compliance, and configuration management capabilities to the cloud, creating a unified hybrid management plane.
Hyper-V Manager with manual VM monitoring is a basic tool suitable for managing individual virtual machines but lacks enterprise-scale management, centralized reporting, and cloud integration. Windows Admin Center without Azure integration offers a modern interface for managing servers and VMs locally, but it does not provide hybrid monitoring, alerting, or governance. Azure Site Recovery is primarily designed for disaster recovery and replication, and while it can protect VMs, it does not provide comprehensive management and monitoring capabilities.
With SCVMM integrated with Azure Arc, organizations gain centralized control, policy enforcement, and visibility across hybrid resources. Administrators can leverage Azure Monitor, Log Analytics, and Azure Security Center for unified telemetry, enabling proactive management of VM performance, health, and security. This solution supports scaling workloads, maintaining compliance, and implementing governance policies across both on-premises and cloud environments. Understanding how to integrate SCVMM with Azure Arc demonstrates mastery of hybrid infrastructure management, which is a critical skill for the Microsoft AZ-800 exam and modern IT administration practices.