Microsoft AZ-800 Administering Windows Server Hybrid Core Infrastructure Exam Dumps and Practice Test Questions Set 3 Q 41-60


Question 41

You have a server named Server1 that runs Windows Server. Server1 has the Hyper-V role installed and hosts several virtual machines. You need to ensure that VM checkpoints are automatically deleted after 24 hours. What should you configure?

A) Automatic checkpoint settings

B) Checkpoint file location

C) Production checkpoints

D) Standard checkpoints

Answer: A

Explanation:

Automatic checkpoint settings in Hyper-V control the behavior of checkpoints that are created automatically by the system or through scheduled operations. Hyper-V has no built-in feature that deletes checkpoints after a specific time period, but you can build on automatic checkpoint behavior with PowerShell scripts or third-party management tools that leverage Hyper-V's management capabilities. The automatic checkpoint settings determine how checkpoints are created and maintained, and they can be combined with scripting to implement retention policies.

To implement automatic deletion of checkpoints after 24 hours, you would typically create a scheduled task that runs a PowerShell script to enumerate checkpoints older than 24 hours and delete them using the Remove-VMSnapshot cmdlet. This script would query each virtual machine for its checkpoints, check the creation timestamp, and remove those exceeding the 24-hour threshold. The scheduled task would run periodically, such as hourly or daily, to enforce the retention policy across all virtual machines on Server1.
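
As a rough illustration, the following sketch shows what such a cleanup script could look like. It assumes the standard Hyper-V PowerShell module and a fixed 24-hour threshold; adjust the filtering and logging to your own retention policy before relying on anything like this.

```powershell
# Remove checkpoints older than 24 hours from every VM on this host.
# Illustrative retention script - the threshold and the "all VMs" scope are assumptions.
$cutoff = (Get-Date).AddHours(-24)

Get-VM | Get-VMSnapshot |
    Where-Object { $_.CreationTime -lt $cutoff } |
    ForEach-Object {
        Write-Output "Removing checkpoint '$($_.Name)' on VM '$($_.VMName)'"
        $_ | Remove-VMSnapshot -Confirm:$false
    }
```

Registering this script with a scheduled task that runs hourly or daily is what actually enforces the 24-hour policy described above.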

This approach provides flexibility in managing checkpoint lifecycle and preventing checkpoint accumulation that can consume excessive storage space. Checkpoints, especially standard checkpoints, can grow large over time as they preserve the state of virtual machines at specific points in time. Implementing an automatic deletion policy ensures that temporary checkpoints used for testing or short-term rollback purposes do not persist indefinitely and impact storage capacity. The automatic checkpoint configuration represents the management framework for controlling checkpoint behavior systematically.

Why other options are incorrect: B is incorrect because checkpoint file location specifies where checkpoint files are stored on the host’s storage system, controlling the path where AVHD or AVHDX files are created. This setting affects storage organization but does not control checkpoint retention or automatic deletion policies. C is incorrect because production checkpoints use Volume Shadow Copy Service to create application-consistent checkpoints without capturing memory state, making them suitable for production environments. However, the checkpoint type does not determine whether checkpoints are automatically deleted after a time period. D is incorrect because standard checkpoints capture the complete state of a virtual machine including memory, making them useful for development and test scenarios. Like production checkpoints, the checkpoint type itself does not provide automatic deletion functionality based on age.

Question 42

You have an Azure subscription that contains a virtual network named VNet1. You plan to deploy Azure Bastion to VNet1. You need to prepare VNet1 for Azure Bastion deployment. What should you create in VNet1?

A) A subnet named AzureBastionSubnet

B) A network security group

C) A VPN gateway

D) A NAT gateway

Answer: A

Explanation:

Azure Bastion is a fully managed PaaS service that provides secure RDP and SSH connectivity to virtual machines directly through the Azure portal without exposing public IP addresses on the target VMs. To deploy Azure Bastion in a virtual network, you must create a dedicated subnet specifically named AzureBastionSubnet. This subnet name is mandatory and cannot be changed, as Azure Bastion requires this specific subnet to deploy its infrastructure components that facilitate the secure connection brokering service.

The AzureBastionSubnet must meet specific requirements, including a minimum size of /26 or larger to accommodate the Azure Bastion infrastructure (earlier guidance permitted /27, but current deployments require at least /26). The subnet should not contain any other resources, as it is reserved exclusively for Azure Bastion deployment. When you create this subnet in VNet1, you define the address space that Azure Bastion will use for its internal operations. The subnet must be created before attempting to deploy an Azure Bastion resource, as the deployment process validates the presence of this specifically named subnet.
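
A minimal Azure PowerShell (Az.Network) sketch for adding the subnet is shown below; the resource group name and address prefix are placeholders, and the prefix only needs to fall inside VNet1's address space.

```powershell
# Add the mandatory AzureBastionSubnet to VNet1 (resource group and prefix are placeholders).
$vnet = Get-AzVirtualNetwork -Name 'VNet1' -ResourceGroupName 'RG1'

Add-AzVirtualNetworkSubnetConfig -Name 'AzureBastionSubnet' `
    -VirtualNetwork $vnet -AddressPrefix '10.0.254.0/26'

$vnet | Set-AzVirtualNetwork
```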

After creating the AzureBastionSubnet, you can deploy the Azure Bastion resource, which provisions the necessary infrastructure within this subnet. The Bastion service then provides connectivity to virtual machines in VNet1 and any peered virtual networks, eliminating the need for jump boxes or exposing management ports to the internet. The dedicated subnet ensures isolation of the Bastion infrastructure from other network resources while providing the necessary address space for the service to operate.

Why other options are incorrect: B is incorrect because while network security groups can be associated with the AzureBastionSubnet to control traffic flow, creating an NSG is not a prerequisite for deploying Azure Bastion. The subnet itself must exist first, and NSG configuration is optional for basic Bastion functionality. C is incorrect because a VPN gateway provides site-to-site or point-to-site VPN connectivity and is a completely separate service from Azure Bastion. VPN gateway is not required for Azure Bastion deployment and serves different connectivity purposes. D is incorrect because a NAT gateway provides outbound internet connectivity for resources in a subnet, but it is not required for Azure Bastion deployment. Azure Bastion manages its own connectivity requirements and does not depend on NAT gateway infrastructure.

Question 43

You have a server named Server1 that runs Windows Server and has the DHCP Server role installed. You configure a DHCP scope with an address range of 192.168.1.100 to 192.168.1.200. You need to ensure that the addresses from 192.168.1.150 to 192.168.1.160 are never assigned to DHCP clients. What should you create?

A) An exclusion range

B) A reservation

C) A scope option

D) A filter

Answer: A

Explanation:

An exclusion range in DHCP configuration allows you to specify a contiguous range of IP addresses within a scope that should never be assigned to DHCP clients. When you create an exclusion range for addresses 192.168.1.150 to 192.168.1.160, the DHCP server removes these addresses from its available pool and will never lease them to any client, regardless of whether the client requests a specific address or accepts any available address. Exclusion ranges are commonly used to reserve blocks of addresses for servers, network devices, or other infrastructure that requires static IP addressing.

Creating an exclusion range is straightforward through the DHCP management console or PowerShell. You specify the starting and ending IP addresses of the range you want to exclude, and the DHCP server immediately removes these addresses from the allocation pool. Multiple exclusion ranges can be created within a single scope, allowing you to exclude non-contiguous blocks of addresses as needed. Excluded addresses can still be manually assigned as static IP addresses on devices, and the DHCP server will not conflict with these assignments since it never attempts to lease excluded addresses.
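
A minimal PowerShell example follows, assuming the scope ID is 192.168.1.0:

```powershell
# Exclude 192.168.1.150-192.168.1.160 so the DHCP server never leases them.
Add-DhcpServerv4ExclusionRange -ComputerName 'Server1' -ScopeId 192.168.1.0 `
    -StartRange 192.168.1.150 -EndRange 192.168.1.160

# Confirm the exclusion was recorded.
Get-DhcpServerv4ExclusionRange -ComputerName 'Server1' -ScopeId 192.168.1.0
```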

Exclusion ranges are more efficient than creating individual reservations when you need to prevent assignment of multiple consecutive addresses. While reservations tie specific addresses to particular clients based on MAC addresses or client identifiers, exclusions simply remove addresses from the available pool without associating them with any specific client. This makes exclusions the appropriate choice when you want to reserve a block of addresses for manual assignment or for devices that will not use DHCP at all.

Why other options are incorrect: B is incorrect because reservations associate specific IP addresses with specific clients based on their MAC addresses or DHCP unique identifiers. While you could theoretically create 11 separate reservations for non-existent clients, this would be inefficient and poor practice. Reservations are intended for ensuring specific clients always receive the same address, not for removing addresses from the pool. C is incorrect because scope options configure parameters like default gateway, DNS servers, and domain name that are distributed to DHCP clients along with their IP address assignments. Scope options do not control which addresses are available for assignment. D is incorrect because DHCP filters use MAC addresses to allow or deny DHCP service to specific clients, controlling which clients can receive addresses from the server. Filters do not remove specific IP addresses from the available pool.

Question 44

You have an on-premises Active Directory Domain Services domain. You have an Azure subscription that contains an Azure SQL database named SQL1. You implement Azure AD Connect to synchronize on-premises identities to Azure AD. You need to configure SQL1 to support authentication using on-premises user accounts. What should you configure on SQL1?

A) Azure Active Directory authentication

B) SQL Server authentication

C) Windows authentication

D) Certificate-based authentication

Answer: A

Explanation:

Azure SQL Database supports authentication through multiple mechanisms, and when you have synchronized on-premises Active Directory identities to Azure Active Directory using Azure AD Connect, you can enable Azure Active Directory authentication on SQL1 to allow on-premises users to authenticate to the database using their synchronized identities. Azure AD authentication for Azure SQL Database creates a bridge between the on-premises identity infrastructure and the cloud database service, enabling seamless authentication for hybrid identity scenarios.

Configuring Azure AD authentication on SQL1 involves setting an Azure AD administrator for the SQL server, which can be an Azure AD user or group. Once configured, users whose accounts have been synchronized from on-premises AD to Azure AD can connect to SQL1 using their Azure AD credentials, which correspond to their on-premises domain accounts. The authentication flow leverages Azure AD as the identity provider, validating credentials and issuing tokens that SQL1 accepts for database access. This approach provides centralized identity management and eliminates the need to maintain separate SQL authentication credentials.
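
As a hedged sketch, setting the Azure AD administrator can be scripted with the Az.Sql module; the resource group, logical server name, and group display name below are placeholders.

```powershell
# Designate an Azure AD group as administrator of the logical server hosting SQL1.
Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName 'RG1' `
    -ServerName 'sqlserver1' -DisplayName 'SQL Admins'
```

Once the administrator is set, contained database users for synchronized accounts are created inside SQL1 with T-SQL such as CREATE USER [user@contoso.com] FROM EXTERNAL PROVIDER.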

Azure AD authentication for Azure SQL Database supports multiple authentication methods including password authentication, integrated authentication for domain-joined machines, and multi-factor authentication for enhanced security. When users connect to SQL1, they specify their Azure AD identity, and the authentication process validates their credentials against Azure AD, which maintains synchronization with the on-premises directory through Azure AD Connect. This configuration enables single sign-on experiences and consistent security policies across on-premises and cloud resources.

Question 45

You have a server named Server1 that runs Windows Server. You plan to use Storage Migration Service to migrate data from Server1 to an Azure file share. You need to install the required components for Storage Migration Service. What should you install?

A) Storage Migration Service orchestrator and proxy

B) Azure File Sync agent

C) Data Migration Assistant

D) Azure Migrate appliance

Answer: A

Explanation:

Storage Migration Service is a Windows Server feature designed to simplify the migration of data from older servers to newer servers or to Azure. The service consists of two main components: the orchestrator and the proxy. The orchestrator is the central management component that coordinates the entire migration process, running on a Windows Server that manages the migration jobs, inventory, transfer, and cutover phases. The proxy component is installed on servers that facilitate the data transfer, particularly useful when the orchestrator cannot directly reach the source or destination servers.

To migrate data from Server1 to an Azure file share using Storage Migration Service, you first install the Storage Migration Service orchestrator on a management server, which can be Server1 itself or a separate Windows Server. The orchestrator provides the graphical interface and PowerShell cmdlets for managing migration projects. If network topology requires it, you also deploy proxy servers that act as intermediaries for data transfer. The orchestrator discovers the source servers, inventories their data, transfers files and shares, and can cut over the identity of the source server to the destination.
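
Both components are Windows Server features, so the installation can be scripted; the sketch below uses the feature names exposed by recent builds, and Get-WindowsFeature can confirm the exact names in your environment.

```powershell
# List the Storage Migration Service features available on this server.
Get-WindowsFeature -Name *SMS*

# Install the orchestrator (with management tools); add the proxy where required.
Install-WindowsFeature -Name SMS -IncludeManagementTools
Install-WindowsFeature -Name SMS-Proxy
```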

For migrations to Azure file shares specifically, Storage Migration Service can transfer data directly to Azure Files while preserving permissions, timestamps, and other metadata. The orchestrator handles authentication to Azure, manages the transfer process to optimize throughput, and provides progress reporting. After installation, you use the Storage Migration Service interface in Windows Admin Center or the dedicated MMC snap-in to create and manage migration jobs that move data from on-premises servers to Azure file shares.

Question 46

You have a Windows Server failover cluster that hosts several highly available virtual machines. You need to configure the cluster to use a cloud witness in Azure. What information do you need from Azure to configure the cloud witness?

A) Storage account name and access key

B) Virtual machine name and password

C) Virtual network name and resource group

D) Subscription ID and tenant ID

Answer: A

Explanation:

Cloud Witness is a quorum witness type for Windows Server Failover Clusters that uses Microsoft Azure Blob Storage as the arbitration point. To configure Cloud Witness, you need two pieces of information from Azure: the storage account name and a storage account access key. These credentials allow the failover cluster to authenticate to Azure and write the witness data to blob storage. The storage account serves as the neutral location where the cluster stores a small blob file used for quorum arbitration during split-brain scenarios or cluster membership decisions.

The configuration process involves creating a general-purpose storage account in Azure if you don’t already have one suitable for witness purposes. From the storage account settings in the Azure portal, you retrieve the storage account name and either the primary or secondary access key. These credentials are then entered into the cluster quorum configuration using Failover Cluster Manager or PowerShell. The cluster uses these credentials to create and maintain a blob container and witness blob in the storage account, writing updates whenever cluster membership or state changes occur.
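
A minimal configuration sketch follows; the storage account name is a placeholder and the access key value is copied from the storage account's Access keys blade.

```powershell
# Run on any cluster node: point the quorum at a Cloud Witness in Azure Blob Storage.
Set-ClusterQuorum -CloudWitness -AccountName 'cwstorage01' `
    -AccessKey '<storage-account-access-key>'
```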

Cloud Witness offers several advantages over traditional file share witness or disk witness options, including elimination of infrastructure dependencies, automatic Azure-provided redundancy, and suitability for multi-site clusters. The storage account should be in a different Azure region from your primary infrastructure to ensure maximum resiliency. Azure storage accounts provide built-in replication and high availability, making them reliable arbitration points for cluster quorum. The access key should be treated as sensitive information and can be rotated periodically by updating both the Azure storage account and the cluster configuration.

Why other options are incorrect: B is incorrect because Cloud Witness does not use Azure virtual machines as the witness infrastructure. The witness functionality is provided by Azure Blob Storage, not compute resources, so VM credentials are not relevant to configuring Cloud Witness. C is incorrect because while the storage account exists in a virtual network context and resource group in Azure, these organizational elements are not the credentials needed to configure Cloud Witness. The cluster needs authentication credentials specifically for the storage account itself. D is incorrect because subscription ID and tenant ID are Azure management and identity metadata, but they are not the authentication credentials that the cluster uses to access the storage account. The storage account name and access key provide the specific credentials for blob storage access.

Question 47

You have a server named Server1 that runs Windows Server and has the DNS Server role installed. You need to configure DNS to support DNSSEC for a zone named contoso.com. What should you do first?

A) Sign the zone

B) Configure zone replication

C) Create a conditional forwarder

D) Enable DNS cache locking

Answer: A

Explanation:

DNSSEC (Domain Name System Security Extensions) provides authentication and integrity protection for DNS data by using digital signatures to verify that DNS responses have not been tampered with during transit. To implement DNSSEC for a DNS zone, the first step is to sign the zone, which generates cryptographic signatures for all resource records in the zone and creates the necessary DNSSEC-specific records including RRSIG, DNSKEY, DS, and NSEC or NSEC3 records. Zone signing is the foundational operation that enables DNSSEC protection.

When you sign the contoso.com zone on Server1, the DNS server uses cryptographic keys to create digital signatures for each resource record set in the zone. This process can be initiated through the DNS Manager console by right-clicking the zone and selecting DNSSEC and then Sign the Zone, or through PowerShell using appropriate DNSSEC cmdlets. The signing process generates a Key Signing Key (KSK) and a Zone Signing Key (ZSK), with the KSK signing the DNSKEY records and the ZSK signing all other records in the zone. These keys can be stored in software or in hardware security modules for enhanced security.
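
The cmdlet-based equivalent of the DNS Manager wizard looks roughly like the following sketch, which accepts the server-generated default key settings:

```powershell
# Sign contoso.com with default KSK/ZSK settings generated by the DNS server.
Invoke-DnsServerZoneSign -ZoneName 'contoso.com' -SignWithDefault -Force -PassThru

# Inspect the keys created for the zone.
Get-DnsServerSigningKey -ZoneName 'contoso.com'
```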

After signing the zone, DNSSEC-aware resolvers can validate the signatures on DNS responses, ensuring the data came from an authoritative source and has not been modified. The zone must be re-signed periodically as signatures have expiration dates, though Windows Server can automate this re-signing process. For a full DNSSEC validation chain, you also need to publish DS records in the parent zone and configure trust anchors, but signing the zone is always the essential first step in DNSSEC implementation.

Question 48

You have an Azure subscription that contains a virtual machine named VM1 running Windows Server. You plan to monitor VM1 using Azure Monitor. You need to collect performance counters and event logs from VM1. What should you deploy to VM1?

A) Azure Monitor agent

B) Azure Backup agent

C) Azure Site Recovery agent

D) Network Watcher agent

Answer: A

Explanation:

Azure Monitor agent is the modern data collection agent for Azure Monitor that replaces the legacy Log Analytics agent and Diagnostics extension. To collect performance counters, event logs, and other monitoring data from VM1, you need to deploy the Azure Monitor agent to the virtual machine. This agent provides unified data collection capabilities and supports various data sources including Windows Event Logs, performance counters, IIS logs, custom logs, and more. The agent sends collected data to Azure Monitor where it can be analyzed, visualized, and used for alerting.

The Azure Monitor agent uses data collection rules to define what data should be collected from the virtual machine and where it should be sent. After deploying the agent to VM1, you create and associate data collection rules that specify performance counters like CPU usage, memory utilization, and disk I/O, as well as event log categories such as System, Application, and Security logs. The agent authenticates to Azure using managed identity or service principal and transmits data securely to Log Analytics workspaces or other Azure Monitor destinations.
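
Deployment can also be scripted; the sketch below assumes the Az.Compute module and placeholder resource names, and a data collection rule must still be created and associated with VM1 afterward for data to flow.

```powershell
# Install the Azure Monitor agent extension on VM1 (resource names are placeholders).
Set-AzVMExtension -ResourceGroupName 'RG1' -VMName 'VM1' -Location 'eastus' `
    -Name 'AzureMonitorWindowsAgent' -Publisher 'Microsoft.Azure.Monitor' `
    -ExtensionType 'AzureMonitorWindowsAgent' -TypeHandlerVersion '1.0' `
    -EnableAutomaticUpgrade $true
```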

Azure Monitor agent offers several improvements over legacy agents including support for multiple workspaces, network isolation capabilities, enhanced security through managed identity authentication, and more efficient data collection through data collection rules. The agent runs as a service on VM1 and continuously collects configured metrics and logs, enabling comprehensive monitoring and troubleshooting capabilities. Azure Monitor provides extensive query capabilities through Kusto Query Language for analyzing the collected data and creating custom dashboards and alerts.

Question 49

You have a server named Server1 that runs Windows Server and has the File Server role installed. You need to enable SMB compression to reduce bandwidth consumption when transferring files over the network. What should you do?

A) Enable SMB compression using Set-SmbServerConfiguration cmdlet

B) Enable BitLocker on the file server volumes

C) Configure BranchCache in distributed cache mode

D) Enable Data Deduplication on the volumes

Answer: A

Explanation:

SMB compression is a feature introduced in Windows Server that compresses data during SMB file transfers, reducing network bandwidth consumption at the cost of increased CPU utilization. This feature is particularly beneficial when transferring files over slow or constrained network connections, such as WAN links or VPN connections. To enable SMB compression on Server1, you use the Set-SmbServerConfiguration PowerShell cmdlet with appropriate parameters that control compression behavior for the SMB server.

SMB compression can be configured at multiple levels including per-server defaults and per-share settings. Using Set-SmbServerConfiguration with the RequestCompression parameter, you can configure whether the server requests compression for outbound transfers. Additionally, clients can request compression when connecting to shares. The compression is negotiated on a per-connection basis, and both client and server must support SMB 3.1.1 or later for compression to function. The compression algorithm automatically adapts based on file types, skipping compression for already-compressed files to avoid wasting CPU cycles.
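
A short sketch, assuming a recent Windows Server 2022 build where these parameters are available:

```powershell
# Ask the SMB server to request compression on transfers.
Set-SmbServerConfiguration -RequestCompression $true -Force

# Compression can also be requested per share at creation time.
New-SmbShare -Name 'Data' -Path 'D:\Data' -CompressData $true
```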

When SMB compression is enabled, file transfers automatically use compression when appropriate, potentially reducing bandwidth consumption by 40-60% depending on file compressibility. The feature is transparent to applications and users, as the compression and decompression occur at the SMB protocol layer. Administrators can monitor the effectiveness of compression through performance counters and should consider the CPU trade-off when enabling compression on heavily utilized file servers.

Question 50

You have an Azure subscription and an on-premises network. You plan to implement a hybrid DNS solution. You need to ensure that Azure virtual machines can resolve on-premises DNS names and on-premises computers can resolve Azure DNS names. What should you configure in Azure?

A) DNS forwarders in Azure DNS private zones

B) Traffic Manager profiles

C) Azure Private Link

D) Azure Virtual WAN

Answer: A

Explanation:

A hybrid DNS solution requires bidirectional name resolution between Azure and on-premises environments, enabling resources in each location to resolve names in the other. To accomplish this, you need to configure DNS forwarders appropriately in both environments. In Azure, when using Azure DNS private zones, you can configure conditional forwarding or deploy DNS servers in Azure that forward queries for on-premises domains to your on-premises DNS servers. Simultaneously, on-premises DNS servers must be configured to forward queries for Azure-hosted domains to Azure DNS or to DNS servers deployed in Azure.

For Azure virtual machines to resolve on-premises DNS names, you typically deploy DNS servers in Azure virtual machines that act as conditional forwarders, forwarding queries for on-premises domain names to the on-premises DNS infrastructure over VPN or ExpressRoute connections. These Azure-hosted DNS servers can also host Azure DNS private zones or forward Azure DNS queries to the Azure-provided DNS service at 168.63.129.16. Virtual machines are then configured to use these Azure-hosted DNS servers as their primary DNS servers, enabling resolution of both Azure and on-premises names.

For on-premises computers to resolve Azure DNS names, particularly names in Azure DNS private zones, you configure on-premises DNS servers with conditional forwarders pointing to the DNS servers deployed in Azure. These on-premises forwarders send queries for Azure-specific domain names to the Azure DNS infrastructure, which can resolve names from private zones linked to your virtual networks. This bidirectional forwarder configuration creates a complete hybrid DNS solution where name resolution works seamlessly across the hybrid environment.
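
Both directions of forwarding can be expressed with the same cmdlet; the zone names and server IP addresses below are placeholders for illustration.

```powershell
# On a DNS server running in Azure: forward the on-premises namespace to on-premises DNS.
Add-DnsServerConditionalForwarderZone -Name 'corp.contoso.com' `
    -MasterServers 10.10.0.10, 10.10.0.11

# On an on-premises DNS server: forward an Azure private zone namespace
# to the forwarder VMs deployed in the Azure virtual network.
Add-DnsServerConditionalForwarderZone -Name 'privatelink.database.windows.net' `
    -MasterServers 10.1.0.4, 10.1.0.5
```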

Question 51

You have a server named Server1 that runs Windows Server and has the Hyper-V role installed. You create a virtual machine named VM1. You need to configure VM1 to support Discrete Device Assignment (DDA) to pass through a physical GPU to VM1. What should you do on Server1?

A) Dismount the GPU from the host and assign it to VM1

B) Enable Enhanced Session Mode

C) Configure RemoteFX vGPU

D) Install the GPU driver in VM1

Answer: A

Explanation:

Discrete Device Assignment is a Hyper-V feature that allows physical PCIe devices, such as GPUs, NVMe storage controllers, or network adapters, to be passed through directly to a virtual machine, providing near-native performance. To implement DDA for a GPU, you must first dismount or unbind the device from the host operating system, making it unavailable to Server1 and preparing it for exclusive assignment to VM1. This process involves using PowerShell cmdlets to disable the device from the host’s perspective and then assign it to the virtual machine’s configuration.

The dismounting process uses the Dismount-VMHostAssignableDevice cmdlet to remove the GPU from host control. Before dismounting, you must identify the device’s location path using Get-PnpDevice or similar commands to locate the specific GPU. After dismounting, the device is no longer visible or usable by Server1’s operating system. You then use Add-VMAssignableDevice to assign the GPU to VM1, creating a direct hardware connection between the physical device and the virtual machine. The VM must be stopped during this configuration process, and after starting VM1, the guest operating system can detect and use the GPU as if it were physically installed.
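
The end-to-end host-side sequence looks roughly like the sketch below; the device name filter is an assumption that must be adjusted to match the actual GPU, and VM1 must be powered off while the device is assigned.

```powershell
# Locate the GPU and capture its PCIe location path (the name filter is an assumption).
$gpu = Get-PnpDevice -PresentOnly | Where-Object FriendlyName -like '*NVIDIA*'
$locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId `
    -KeyName 'DEVPKEY_Device_LocationPaths').Data[0]

# Disable the device on the host, dismount it, and hand it to VM1.
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

Set-VM -Name 'VM1' -AutomaticStopAction TurnOff   # required for DDA
Add-VMAssignableDevice -LocationPath $locationPath -VMName 'VM1'
```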

DDA provides significant performance benefits for workloads requiring dedicated GPU resources, such as machine learning, CAD applications, or graphics rendering. The GPU operates at nearly native performance since it bypasses the hypervisor’s virtualization layer for device access. However, DDA has requirements including IOMMU support in the host hardware, specific configurations for CPU and memory, and compatibility considerations for the devices being passed through. Once configured, VM1 has exclusive access to the GPU, and it cannot be shared with other VMs or the host simultaneously.

Why other options are incorrect: B is incorrect because Enhanced Session Mode improves the remote desktop connection experience to virtual machines by enabling features like clipboard sharing, audio redirection, and drive redirection. While it enhances usability, Enhanced Session Mode does not provide GPU passthrough or enable DDA functionality. C is incorrect because RemoteFX vGPU is a deprecated virtualization technology that allowed sharing of GPU resources across multiple virtual machines. RemoteFX provided virtualized GPU capabilities but has been removed from Windows Server and does not offer the dedicated performance of DDA. D is incorrect because installing GPU drivers in VM1 is a step that occurs after the GPU has been assigned through DDA, not before. The driver installation happens within the guest operating system after the physical device has been dismounted from the host and assigned to the VM. This is a subsequent configuration step, not the action required to configure DDA on Server1.

Question 52

You have an Azure subscription that contains a Log Analytics workspace named Workspace1. You have several on-premises servers running Windows Server that you want to monitor using Azure Monitor. You need to configure the servers to send data to Workspace1. What should you install on the on-premises servers?

A) Azure Monitor agent

B) Azure Arc agent

C) System Center Operations Manager agent

D) Windows Admin Center

Answer: A

Explanation:

Azure Monitor agent is the current recommended agent for collecting monitoring data from both Azure and on-premises Windows and Linux servers and sending that data to Azure Monitor Log Analytics workspaces. To configure on-premises servers to send monitoring data to Workspace1, you need to install the Azure Monitor agent on each server. This agent replaces the legacy Log Analytics agent and provides improved capabilities including support for multiple workspaces, enhanced security through managed identity, and more flexible data collection through data collection rules.

The installation process involves downloading the Azure Monitor agent installer for Windows and deploying it to your on-premises servers. After installation, you configure the agent to communicate with Azure by creating data collection rules in Azure Monitor that specify what data to collect and which Log Analytics workspace should receive the data. The agent authenticates to Azure and establishes secure connections to transmit logs, performance counters, and other monitoring data to Workspace1. For on-premises servers, you can use various authentication methods including service principals or Azure Arc integration for identity management.
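
For servers connected through Azure Arc, the agent can be pushed as an extension instead of installed manually; the sketch below assumes the Az.ConnectedMachine module, an already onboarded machine, and placeholder resource names.

```powershell
# Deploy the Azure Monitor agent extension to an Arc-enabled on-premises server.
New-AzConnectedMachineExtension -ResourceGroupName 'RG1' -MachineName 'OnPremSrv01' `
    -Location 'eastus' -Name 'AzureMonitorWindowsAgent' `
    -Publisher 'Microsoft.Azure.Monitor' -ExtensionType 'AzureMonitorWindowsAgent'
```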

Azure Monitor agent supports comprehensive data collection including Windows Event Logs, performance counters, IIS logs, custom text logs, and more. The agent efficiently collects and buffers data locally before transmitting to Azure, handling network interruptions gracefully. Data collection rules provide centralized management of what data is collected from which servers, making it easier to manage monitoring configuration across large server fleets. The collected data in Workspace1 can then be queried, visualized, and used for alerting through Azure Monitor capabilities.

Question 53

You have a server named Server1 that runs Windows Server. Server1 hosts a website using Internet Information Services (IIS). You need to configure the website to use Server Name Indication (SNI) to support multiple SSL certificates on a single IP address. What should you configure in IIS?

A) HTTPS binding with Require Server Name Indication enabled

B) Application pool identity

C) Request filtering

D) Authentication methods

Answer: A

Explanation:

Server Name Indication is an extension to the TLS protocol that allows a server to host multiple SSL certificates on a single IP address and port combination. Without SNI, each SSL certificate would require its own unique IP address, which becomes resource-intensive when hosting many HTTPS websites. When configuring an HTTPS binding in IIS with SNI enabled, the web server can inspect the hostname that the client requests during the TLS handshake and present the appropriate SSL certificate for that specific hostname.

To configure SNI in IIS, you create or modify HTTPS bindings for your websites, selecting the option “Require Server Name Indication” in the binding configuration. When you enable this option, you also specify the hostname that the binding applies to and select the appropriate SSL certificate for that hostname. Multiple HTTPS bindings can then share the same IP address and port 443, with each binding associated with a different hostname and certificate. When clients connect, they specify the target hostname in the SNI extension of the TLS handshake, allowing IIS to select and present the correct certificate.
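
A rough PowerShell equivalent of the binding dialog, assuming the WebAdministration module, a placeholder site and host name, and a certificate already present in the machine store:

```powershell
Import-Module WebAdministration

# SslFlags 1 marks the binding as "Require Server Name Indication".
New-WebBinding -Name 'Default Web Site' -Protocol https -Port 443 `
    -HostHeader 'www.contoso.com' -SslFlags 1

# Attach the certificate for that host name to the SNI binding.
$cert = Get-ChildItem Cert:\LocalMachine\My |
    Where-Object Subject -like '*www.contoso.com*'
New-Item -Path 'IIS:\SslBindings\!443!www.contoso.com' -Value $cert -SSLFlags 1
```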

SNI is supported by all modern browsers and clients, though very old clients may not support it. When Require Server Name Indication is enabled on a binding, only clients that send SNI information can successfully establish HTTPS connections to that specific binding. This allows you to efficiently host multiple HTTPS websites on a single server without requiring multiple IP addresses, significantly simplifying SSL certificate management for environments hosting many secure websites.

Question 54

You have a Windows Server failover cluster named Cluster1. You need to upgrade the cluster functional level to enable new features available in the current version of Windows Server. What should you do first?

A) Verify all nodes are running the same version of Windows Server

B) Pause all cluster nodes

C) Update the cluster network settings

D) Drain roles from all nodes

Answer: A

Explanation:

The cluster functional level determines which features are available to the failover cluster and must match the lowest version of Windows Server running on any cluster node. Before you can upgrade the cluster functional level to enable new features, you must first verify that all cluster nodes are running the same version of Windows Server. If any node is running an older version, the cluster functional level cannot be upgraded beyond the capabilities of that older version, as doing so would create incompatibilities and potentially cause cluster failures.

The cluster functional level upgrade process is typically performed during a rolling upgrade scenario where you update cluster nodes one at a time from an older version of Windows Server to a newer version. During this process, the cluster operates in a mixed-mode state where nodes run different Windows Server versions, and the functional level remains at the older version to maintain compatibility. Only after all nodes have been upgraded to the newer Windows Server version can you safely upgrade the cluster functional level. Attempting to upgrade the functional level while nodes still run different versions would fail or cause operational issues.

To verify that all nodes are at the same version, you can use Failover Cluster Manager to examine the properties of each node, checking the operating system version information. Alternatively, PowerShell cmdlets like Get-ClusterNode can retrieve version information for all nodes programmatically. Once verification confirms all nodes run the same current version of Windows Server, you can proceed with upgrading the cluster functional level using the Update-ClusterFunctionalLevel cmdlet or through the Failover Cluster Manager interface. This upgrade is irreversible, so verification is critical.
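
A short verification-then-upgrade sketch follows; the -WhatIf switch is left in place so the irreversible step is not run accidentally, and is removed once the output confirms every node matches.

```powershell
# Confirm every node reports the same OS version before raising the functional level.
Get-ClusterNode -Cluster 'Cluster1' |
    Format-Table Name, State, MajorVersion, MinorVersion, BuildNumber

# Check the current level, then upgrade once all nodes match.
(Get-Cluster -Name 'Cluster1').ClusterFunctionalLevel
Update-ClusterFunctionalLevel -Cluster 'Cluster1' -WhatIf
```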

Why other options are incorrect: B is incorrect because pausing cluster nodes prevents them from hosting new clustered roles but does not affect the ability to upgrade the cluster functional level. Pausing is used for maintenance operations that require preventing role placement, but functional level upgrades do not require paused nodes. Verification of node versions is the prerequisite step. C is incorrect because updating cluster network settings configures network communication preferences and priorities but has no relationship to cluster functional level upgrades. Network settings can be modified at any time and are not prerequisites for functional level changes. D is incorrect because draining roles moves clustered workloads off nodes, preparing them for maintenance or shutdown. While draining might be part of a node upgrade process, it is not required before upgrading the cluster functional level. The functional level can be upgraded while roles continue running, as long as all nodes are already at the target Windows Server version.

Question 55

You have an Azure subscription that contains a storage account named storage1. You need to configure storage1 to support Azure File Share backups using Azure Backup. What should you create in storage1?

A) A Recovery Services vault in the same region as storage1

B) A backup policy

C) A file share

D) A snapshot schedule

Answer: C

Explanation:

Azure Backup for Azure Files provides native backup capabilities for Azure file shares, protecting data through snapshot-based backups that are stored within the Azure Files service itself. Before you can configure backup for a file share, the file share must exist within the storage account. The file share is the resource that will be backed up, so creating the file share in storage1 is the prerequisite step before you can configure any backup protection for it.

Azure Files backup uses share snapshots, which are read-only point-in-time copies of file shares. When you configure backup for a file share, Azure Backup creates and manages these snapshots according to the backup policy you define. The snapshots are stored in the same storage account as the file share itself, providing efficient incremental backups with minimal storage overhead since only changed blocks are stored. This architecture makes backup and recovery operations fast and cost-effective.

After creating the file share in storage1, you then configure backup protection by associating the file share with a Recovery Services vault and applying a backup policy. The Recovery Services vault manages the backup operations and retention policies, but the actual snapshot data remains in the storage account. The file share must exist before you can select it as a backup target in the vault configuration. Once backup is enabled, Azure Backup automatically creates snapshots according to the schedule defined in the backup policy.
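
A condensed sketch of that sequence is shown below; the resource group, vault, policy, and share names are placeholders.

```powershell
# Create the file share in storage1, then enable Azure Backup protection for it.
New-AzRmStorageShare -ResourceGroupName 'RG1' -StorageAccountName 'storage1' -Name 'share1'

$vault = Get-AzRecoveryServicesVault -Name 'Vault1' -ResourceGroupName 'RG1'
Set-AzRecoveryServicesVaultContext -Vault $vault
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name 'FileSharePolicy1'

Enable-AzRecoveryServicesBackupProtection -StorageAccountName 'storage1' `
    -Name 'share1' -Policy $policy
```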

Why other options are incorrect: A is incorrect because while a Recovery Services vault is indeed required to manage Azure Files backups and should be in the same region as the storage account for optimal performance, the vault is created in the Azure subscription at a resource group level, not within the storage account itself. The question asks what should be created in storage1 specifically, and file shares are created within storage accounts while vaults are separate Azure resources. B is incorrect because backup policies are created and managed within the Recovery Services vault, not within the storage account. Policies define backup schedules and retention settings but are vault-level configurations. The file share must exist in the storage account before policies can be applied to it. D is incorrect because snapshot schedules are managed through backup policies in the Recovery Services vault, not as standalone objects created in the storage account. While snapshots are stored with the file share, the scheduling mechanism is part of the Azure Backup service configuration, not a direct storage account resource.

Question 56

You have a server named Server1 that runs Windows Server. Server1 is configured as a DNS server. You need to configure Server1 to prevent DNS amplification attacks. What should you configure on Server1?

A) Response Rate Limiting (RRL)

B) Cache locking

C) Socket pool size

D) Recursion settings

Answer: A

Explanation:

DNS amplification attacks exploit DNS servers to overwhelm target systems with large volumes of DNS response traffic. Attackers send DNS queries with spoofed source addresses to DNS servers, causing the servers to send large responses to the victim’s IP address. Response Rate Limiting is a DNS security feature specifically designed to mitigate DNS amplification attacks by limiting the rate at which the DNS server responds to queries that could be used for amplification attacks. RRL helps prevent the DNS server from being used as an amplification vector while maintaining normal operation for legitimate queries.

Response Rate Limiting works by identifying patterns of queries that indicate potential abuse, such as multiple queries for the same record from the same subnet or unusual query patterns. When RRL detects potentially malicious query patterns, it limits the rate of responses to those queries, either by reducing the response rate, returning truncated responses that force clients to retry over TCP, or temporarily dropping responses entirely. This prevents attackers from leveraging the DNS server to generate massive amounts of response traffic toward victims while minimally impacting legitimate users.

Configuring RRL on Server1 involves using PowerShell cmdlets or direct configuration of DNS server parameters to set thresholds for response rates, define what constitutes suspicious query patterns, and specify actions to take when limits are exceeded. The configuration is tunable to balance security with performance, allowing legitimate high-volume DNS users while blocking abuse. RRL is particularly important for publicly accessible DNS servers that could be exploited for amplification attacks, and it represents a critical security control for DNS infrastructure exposed to the internet.
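
A minimal example that enables RRL with its built-in defaults and then inspects the effective settings:

```powershell
# Enable Response Rate Limiting with the default thresholds.
Set-DnsServerResponseRateLimiting -Mode Enable -Force

# Review the effective thresholds (responses/sec, errors/sec, window, and so on).
Get-DnsServerResponseRateLimiting
```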

Question 57

You have an Azure subscription and an on-premises network connected via ExpressRoute. You have an on-premises Active Directory domain. You plan to deploy Azure Virtual Desktop with FSLogix profile containers. You need to store FSLogix user profiles. The solution must minimize latency for users. Where should you store the FSLogix profiles?

A) Azure Files with private endpoint

B) Azure Blob Storage

C) On-premises file server

D) Azure Managed Disks

Answer: A

Explanation:

FSLogix profile containers provide a superior user profile experience for Azure Virtual Desktop by storing user profiles in virtual hard disk files on network storage. For optimal performance, these profiles should be stored in Azure Files using a private endpoint connection, which provides low-latency access from Azure Virtual Desktop session hosts while maintaining security through private network connectivity. Azure Files offers SMB protocol support required by FSLogix, excellent performance characteristics, and integration with Azure Active Directory Domain Services or on-premises Active Directory for authentication.

Azure Files with private endpoints creates a dedicated network interface in your Azure virtual network, allowing session hosts to access the file share over the Azure backbone network rather than through public internet endpoints. This configuration significantly reduces latency compared to accessing storage over public endpoints or ExpressRoute connections back to on-premises storage. The private endpoint ensures traffic between session hosts and profile storage never leaves the Microsoft network, providing both performance and security benefits. Azure Files Premium tier offers even better performance with low latency and high IOPS suitable for demanding profile workloads.

The storage account hosting Azure Files should be in the same Azure region as the Azure Virtual Desktop session hosts to minimize latency. Domain joining the storage account or configuring Azure AD Kerberos authentication enables seamless access for users without requiring stored credentials. Azure Files handles the complexities of SMB protocol, high availability, and backup, allowing administrators to focus on Azure Virtual Desktop management rather than storage infrastructure maintenance. This approach provides the best balance of performance, security, and manageability for FSLogix profile storage.
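
On each session host, FSLogix is pointed at the share through registry values; a minimal sketch with a placeholder storage account and share name follows.

```powershell
# Point FSLogix profile containers at the Azure Files share (UNC path is a placeholder).
$key = 'HKLM:\SOFTWARE\FSLogix\Profiles'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'Enabled' -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name 'VHDLocations' `
    -Value '\\storage1.file.core.windows.net\profiles' -Type MultiString
```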

Question 58

You have a server named Server1 that runs Windows Server and has the Hyper-V role installed. You need to configure Server1 to replicate a virtual machine named VM1 to a server named Server2 for disaster recovery purposes. What should you enable on both Server1 and Server2?

A) Hyper-V Replica

B) Live Migration

C) Storage Migration

D) Enhanced Session Mode

Answer: A

Explanation:

Hyper-V Replica is a disaster recovery feature built into Hyper-V that provides asynchronous replication of virtual machines from one Hyper-V host to another, enabling business continuity in the event of site failures or disasters. To replicate VM1 from Server1 to Server2, you must enable Hyper-V Replica on both servers, configuring Server1 as the primary server hosting the source VM and Server2 as the replica server receiving the replicated data. Hyper-V Replica creates and maintains a copy of VM1 on Server2 that can be activated for failover if Server1 becomes unavailable.

Enabling Hyper-V Replica involves configuring replication settings in Hyper-V Manager or through PowerShell on both servers. On Server2, you configure the server to act as a replica server, specifying authentication methods such as Kerberos or certificate-based authentication, and defining which servers are authorized to replicate to it. On Server1, you enable replication for VM1, specifying Server2 as the replica server, choosing replication frequency, selecting which recovery points to maintain, and configuring optional compression and encryption for replication traffic. After initial replication completes, Hyper-V Replica tracks changes to VM1 and replicates them to Server2 at the configured interval.
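
A condensed sketch of both sides is shown below, assuming Kerberos authentication over HTTP port 80 and a placeholder replica storage path; the matching inbound firewall rule (Hyper-V Replica HTTP Listener) must also be enabled on Server2.

```powershell
# On Server2: accept replication traffic using Kerberos over HTTP.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\Replicas'

# On Server1: enable replication for VM1 and start the initial copy.
Enable-VMReplication -VMName 'VM1' -ReplicaServerName 'Server2' `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'VM1'
```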

Hyper-V Replica provides flexible recovery point objectives with replication frequencies ranging from 30 seconds to 15 minutes, and maintains multiple recovery points allowing recovery to different points in time. The replica VM remains offline on Server2 during normal operations, consuming minimal resources until failover is initiated. Planned failover, unplanned failover, and test failover options provide various scenarios for disaster recovery testing and actual recovery operations. Hyper-V Replica operates independently of storage systems and works across different storage types, providing disaster recovery capabilities without requiring expensive storage replication solutions.

Question 59

You have an Azure subscription that contains an Azure SQL database named SQL1. You need to configure SQL1 to send diagnostic logs to a Log Analytics workspace for monitoring and analysis. What should you configure on SQL1?

A) Diagnostic settings

B) Firewall rules

C) Geo-replication

D) Automatic tuning

Answer: A

Explanation:

Diagnostic settings in Azure SQL Database control the collection and routing of diagnostic logs and metrics to various destinations including Log Analytics workspaces, storage accounts, event hubs, and partner solutions. To send diagnostic logs from SQL1 to a Log Analytics workspace for monitoring and analysis, you must configure diagnostic settings on the SQL database resource. Diagnostic settings specify which log categories and metrics to collect and where to send them, enabling comprehensive monitoring and troubleshooting capabilities through Azure Monitor.

Configuring diagnostic settings involves accessing SQL1 in the Azure portal, navigating to the Diagnostic settings section, and creating a new diagnostic setting. You select the log categories you want to collect, such as SQLInsights, AutomaticTuning, QueryStoreRuntimeStatistics, Errors, DatabaseWaitStatistics, Timeouts, Blocks, and Deadlocks. You also select relevant metrics like Basic metrics or InstanceAndAppAdvanced metrics. After selecting the desired logs and metrics, you specify the destination as a Log Analytics workspace and select the specific workspace where the data should be sent.
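
The same configuration can be scripted. Older releases of the Az.Monitor module expose Set-AzDiagnosticSetting, which the sketch below uses with placeholder resource and workspace names; newer module versions replace it with New-AzDiagnosticSetting and helper objects, so check which cmdlets your installed version provides.

```powershell
# Route selected SQL1 log categories to a Log Analytics workspace (names are placeholders).
$dbId = (Get-AzSqlDatabase -ResourceGroupName 'RG1' -ServerName 'sqlserver1' `
    -DatabaseName 'SQL1').ResourceId
$wsId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName 'RG1' `
    -Name 'Workspace1').ResourceId

Set-AzDiagnosticSetting -Name 'SQL1-to-LogAnalytics' -ResourceId $dbId `
    -WorkspaceId $wsId -Enabled $true -Category 'Errors','Timeouts','Blocks','Deadlocks'
```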

Once diagnostic settings are configured, Azure SQL Database begins sending the selected logs and metrics to the specified Log Analytics workspace. This data can then be queried using Kusto Query Language, visualized in workbooks, and used for creating alerts. Diagnostic logs provide deep insights into database performance, query execution patterns, errors, and operational health. The logs in Log Analytics enable correlation with other Azure resources, historical analysis, and integration with broader monitoring and SIEM solutions. Diagnostic settings are essential for operational visibility and proactive database management.

Question 60

You have a server named Server1 that runs Windows Server. Server1 has the DHCP Server role installed. A DHCP scope has been configured with an address range of 192.168.10.1 to 192.168.10.254. You need to configure the scope to provide clients with the IP address of a WINS server. What should you configure?

A) Scope options

B) Policies

C) Filters

D) Reservations

Answer: A

Explanation:

DHCP scope options provide configuration parameters to DHCP clients along with their IP address assignments. These options include settings such as default gateway, DNS servers, domain name, WINS servers, and numerous other TCP/IP configuration parameters defined in RFC standards. To configure the DHCP scope to provide clients with the IP address of a WINS server, you need to configure scope options, specifically DHCP option 044 for WINS/NBNS servers. Scope options apply to all clients receiving addresses from that particular scope, providing consistent configuration across the network segment.

Configuring scope options involves opening the DHCP management console, navigating to the scope’s Scope Options folder, and adding or modifying option 044 (WINS/NBNS Servers). You specify the IP address or addresses of the WINS servers that clients should use for NetBIOS name resolution. When DHCP clients receive their IP address configuration from this scope, they also receive the WINS server configuration and automatically configure their network settings to use the specified WINS servers for NetBIOS name registration and resolution. This automation eliminates the need for manual configuration on each client.
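
A minimal example, assuming the scope ID is 192.168.10.0 and using a placeholder WINS server address:

```powershell
# Set option 044 (WINS/NBNS Servers) at the scope level.
Set-DhcpServerv4OptionValue -ComputerName 'Server1' -ScopeId 192.168.10.0 `
    -WinsServer 192.168.10.5

# Equivalent explicit form using the option ID:
# Set-DhcpServerv4OptionValue -ScopeId 192.168.10.0 -OptionId 44 -Value 192.168.10.5
```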

Scope options can be configured at multiple levels in DHCP including server options that apply to all scopes, scope-level options that apply to a specific scope, and reservation options that apply to specific reserved addresses. The hierarchy determines which option value takes precedence, with reservation options overriding scope options, and scope options overriding server options. For providing WINS server configuration to all clients in a specific scope, configuring the option at the scope level is appropriate. This ensures consistent WINS configuration for the network segment served by that scope.

 
