Microsoft AZ-204 Developing Solutions for Azure Exam Dumps and Practice Test Questions Set 9 (Q161-180)


Question 161

You are developing an Azure Function that processes messages from an Azure Service Bus queue. The function must process messages in order and ensure that no message is processed more than once. What should you configure?

A) Set the session-enabled property to true on the queue

B) Enable duplicate detection on the queue

C) Set the max delivery count to 1

D) Enable auto-complete in the function binding

Answer: A

Explanation:

To process messages in order and ensure exactly-once processing in Azure Service Bus, you need to enable sessions. When you set the session-enabled property to true on the queue, it ensures that messages with the same session ID are processed sequentially by a single receiver. This guarantees ordered message processing and prevents concurrent processing of messages within the same session.

Sessions work by grouping related messages together using a session identifier. When a receiver locks a session, it has exclusive access to all messages in that session, ensuring that messages are processed in the order they were sent. This is essential for scenarios where business logic depends on message sequence, such as order processing or transaction workflows.

Option B (duplicate detection) helps prevent duplicate messages from being added to the queue but doesn’t guarantee ordered processing. Duplicate detection uses a detection window to identify and reject messages with duplicate MessageId values.

Option C (max delivery count) defines how many times the Service Bus will attempt to deliver a message before moving it to the dead-letter queue. Setting this to 1 would cause messages to be dead-lettered after a single failure, which is too aggressive and doesn’t address ordering.

Option D (auto-complete) automatically completes messages after the function executes successfully, but it doesn’t provide ordering guarantees or prevent race conditions when multiple instances process messages concurrently.

For implementing this in Azure Functions, you would also need to set IsSessionsEnabled = true in the ServiceBusTrigger attribute and handle session management in your code. The combination of queue-level session enablement and proper function configuration ensures both ordered processing and exactly-once semantics.
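For illustration, a minimal C# sketch (in-process model, Microsoft.Azure.WebJobs.Extensions.ServiceBus) might look like the following; the queue name and connection setting name are placeholders, not values from the question.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderProcessor
{
    [FunctionName("ProcessOrder")]
    public static void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection", IsSessionsEnabled = true)]
        string message,
        ILogger log)
    {
        // Messages that share a session ID arrive here sequentially, because the
        // runtime holds an exclusive lock on the session while processing.
        log.LogInformation("Processing: {Body}", message);
    }
}
```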

Question 162

You need to implement a solution that allows multiple Azure Functions to share state information. The solution must minimize latency and support high throughput. What should you use?

A) Azure Table Storage

B) Azure Redis Cache

C) Azure Cosmos DB

D) Azure SQL Database

Answer: B

Explanation:

Azure Redis Cache is the optimal solution for sharing state information between multiple Azure Functions when minimizing latency and supporting high throughput are primary requirements. Redis is an in-memory data store that provides sub-millisecond response times, making it ideal for scenarios requiring fast access to shared state.

Redis Cache excels in this scenario because it operates entirely in memory, eliminating disk I/O bottlenecks. It supports various data structures like strings, hashes, lists, and sets, allowing flexible state management patterns. Azure Redis Cache also provides features like data persistence, clustering for scalability, and geo-replication for high availability.

For Azure Functions specifically, Redis Cache is commonly used for implementing distributed locks, session state management, rate limiting counters, and caching frequently accessed data. The low latency ensures that functions don’t experience performance degradation when accessing shared state, and the high throughput capability handles concurrent requests from multiple function instances efficiently.
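As a rough sketch of the shared-counter pattern with the StackExchange.Redis client (the connection setting and key names are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class SharedState
{
    // Reuse a single multiplexer across invocations; it is thread-safe and expensive to create.
    private static readonly Lazy<ConnectionMultiplexer> Connection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                Environment.GetEnvironmentVariable("RedisConnectionString")));

    public static async Task<long> IncrementRequestCountAsync(string clientId)
    {
        IDatabase db = Connection.Value.GetDatabase();

        // Atomically increment a per-client counter that all function instances share.
        long count = await db.StringIncrementAsync($"requests:{clientId}");

        // Expire the counter after one minute to implement a simple rate-limit window.
        await db.KeyExpireAsync($"requests:{clientId}", TimeSpan.FromMinutes(1));
        return count;
    }
}
```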

Option A (Azure Table Storage) is a NoSQL key-value store that offers good scalability and cost-effectiveness but has higher latency compared to Redis. Typical response times range from single-digit milliseconds to tens of milliseconds.

Option C (Azure Cosmos DB) provides global distribution, multiple consistency models, and low latency, but it’s primarily designed for document storage with more complex querying capabilities. While it can be used for state management, it introduces higher costs and complexity than necessary for simple state sharing scenarios.

Option D (Azure SQL Database) is a relational database that provides ACID transactions and complex querying but has higher latency than in-memory solutions.

Question 163

You are developing a containerized application that will run on Azure Kubernetes Service (AKS). The application must securely access secrets stored in Azure Key Vault. What should you implement?

A) Azure Key Vault FlexVolume driver

B) Secrets Store CSI driver

C) Environment variables in deployment YAML

D) Kubernetes Secrets

Answer: B

Explanation:

The Secrets Store CSI driver is the current recommended approach for integrating Azure Key Vault with Azure Kubernetes Service. CSI stands for Container Storage Interface, which is a standard for exposing storage systems to containerized workloads. The Secrets Store CSI driver mounts secrets, keys, and certificates from Azure Key Vault directly into pods as volumes.

This solution provides several advantages including automatic rotation of secrets without restarting pods, support for managed identities for authentication, and the ability to sync secrets to Kubernetes secrets if needed. The CSI driver architecture is more maintainable and follows Kubernetes standards better than previous solutions.

When implementing the Secrets Store CSI driver, you create a SecretProviderClass resource that defines which Key Vault to use, which secrets to retrieve, and how to authenticate. The driver then mounts these secrets as files in the pod’s filesystem, making them accessible to your application. This approach ensures secrets are never exposed in pod specifications or container images.
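As a rough sketch of the consuming side, assuming a SecretProviderClass and pod volume mount that expose the secret at /mnt/secrets-store/db-password (both the mount path and the secret name are placeholders defined by your configuration):

```csharp
using System.IO;

public static class MountedSecrets
{
    public static string ReadDatabasePassword()
    {
        // Each Key Vault secret selected by the SecretProviderClass appears as an
        // individual file inside the mounted volume.
        return File.ReadAllText("/mnt/secrets-store/db-password").Trim();
    }
}
```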

Option A (Azure Key Vault FlexVolume driver) was the previous solution for this scenario but has been deprecated in favor of the CSI driver. While it still works, Microsoft recommends migrating to the CSI driver for better support and features.

Option C (environment variables in deployment YAML) would require hardcoding secrets or references in YAML files, which is a security risk. Secrets would be visible in version control and pod specifications, violating security best practices.

Option D (Kubernetes Secrets) stores secrets within the Kubernetes cluster itself, but they are only base64 encoded by default, not encrypted at rest unless additional configuration is applied. This doesn’t provide the same level of security as Azure Key Vault.

Question 164

You are implementing Azure API Management to expose backend APIs. You need to implement rate limiting based on subscription keys. What should you configure?

A) JWT validation policy

B) Rate limit policy

C) Quota policy

D) IP filtering policy

Answer: B

Explanation:

The rate limit policy in Azure API Management is specifically designed to control the number of API calls allowed within a specific time period based on subscription keys. This policy prevents API abuse and ensures fair usage across different subscribers while protecting backend services from being overwhelmed.

When you configure a rate limit policy, you specify the number of calls allowed and the renewal period. For example, you might allow 100 calls per minute per subscription key. The policy tracks call counts for each subscription key separately and returns a 429 Too Many Requests response when the limit is exceeded. Headers in the response indicate the limit, remaining calls, and reset time.

The rate limit policy can be applied at different scopes including global, product, API, or operation level, giving you flexibility in how you enforce limits. You can also customize limits based on criteria such as the calling IP address or claims in a JWT token; the rate-limit-by-key variant supports these custom counters through its counter-key attribute.

Option A (JWT validation policy) is used for authentication and authorization by validating JSON Web Tokens, but it doesn’t provide rate limiting functionality. It ensures that incoming requests contain valid tokens with required claims.

Option C (quota policy) is similar to rate limiting but works differently. While rate limiting controls the call rate within short time windows, quotas control total usage over longer periods like weeks or months. Quotas are typically used for billing purposes.

Option D (IP filtering policy) controls access based on IP addresses, allowing or denying requests from specific IP ranges. This is used for security purposes rather than rate limiting.

Question 165

You are developing an Azure Function with a Blob trigger. The function must process only new blobs and ignore existing blobs when deployed. What should you do?

A) Set the BlobTrigger attribute’s Source parameter to EventGrid

B) Configure the host.json file with a blob trigger configuration

C) Initialize the Azure Storage account connection before deployment

D) Create a checkpoint file in blob storage

Answer: C

Explanation:

When deploying an Azure Function with a Blob trigger for the first time, initializing the Azure Storage account connection before deployment ensures that the function processes only new blobs created after the deployment. The Blob trigger uses a receipt system stored in the azure-webjobs-hosts container to track which blobs have been processed.

By establishing the connection and allowing the function to initialize its receipt tracking system before any processing begins, you create a baseline. The function records the current state of the blob container and only triggers on blobs created or modified after this initialization point. This prevents the function from processing historical data that existed before deployment.

The receipt system works by storing metadata about processed blobs in the azure-webjobs-hosts container. Each time a blob is processed, a receipt is created. When the function scans for new blobs, it checks these receipts to determine which blobs are new. During initial deployment, if the connection is initialized properly, the system marks existing blobs as already seen.

Option A (setting Source to EventGrid) changes how the trigger detects new blobs by using Event Grid instead of polling, which is more efficient but doesn’t specifically address ignoring existing blobs during initial deployment.

Option B (configuring host.json) allows you to adjust trigger behavior settings like polling intervals and batch sizes, but it doesn’t provide a mechanism to distinguish between existing and new blobs during initial deployment.

Option D (creating a checkpoint file) isn’t a standard approach for Blob triggers. While checkpoints are used in Event Hub triggers to track progress, Blob triggers use the receipt system instead.

Question 166

You need to implement distributed tracing for microservices running in Azure. The solution must provide end-to-end visibility across multiple services. What should you use?

A) Azure Monitor Logs

B) Azure Application Insights

C) Azure Log Analytics

D) Azure Activity Log

Answer: B

Explanation:

Azure Application Insights is the comprehensive solution for distributed tracing in Azure, providing end-to-end visibility across microservices architectures. It implements distributed tracing using correlation IDs and operation IDs to track requests as they flow through multiple services, creating a complete picture of request execution paths.

Application Insights uses the W3C Trace Context standard to propagate correlation information across service boundaries. When a request enters your system, Application Insights generates a unique operation ID and traces the request as it moves through different services. Each service logs its telemetry with the same operation ID, allowing you to visualize the entire transaction flow in the Application Map and Transaction Search features.

The Application Map feature provides a visual representation of your microservices architecture, showing dependencies between services, performance metrics, and failure rates. You can see how services interact and identify bottlenecks or failing dependencies. The end-to-end transaction view shows the complete request path with timing information for each service call, making it easy to identify performance issues.

Application Insights also provides distributed tracing through its integration with OpenTelemetry, allowing you to use standard instrumentation libraries. The SDK automatically tracks HTTP requests, database calls, and external dependencies, requiring minimal code changes to implement comprehensive tracing.
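As a minimal illustration for an ASP.NET Core service, assuming the Microsoft.ApplicationInsights.AspNetCore package and an Application Insights connection string configured in app settings:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Registers telemetry collection for incoming requests, outgoing HTTP calls, and
// other dependencies, propagating W3C Trace Context across service boundaries.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();
app.MapGet("/", () => "Hello from a traced service");
app.Run();
```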

Option A (Azure Monitor Logs) is the underlying platform for storing and querying log data, but it doesn’t provide the specialized distributed tracing capabilities and visualization tools that Application Insights offers.

Option C (Azure Log Analytics) is the query engine for Azure Monitor Logs and provides powerful querying capabilities, but it doesn’t include the automatic correlation, Application Map, or distributed tracing features specific to Application Insights.

Option D (Azure Activity Log) tracks Azure resource management operations and administrative activities but doesn’t provide application-level distributed tracing across microservices.

Question 167

You are developing an application that stores sensitive data in Azure Cosmos DB. You must implement encryption for data at rest and in transit. What should you configure?

A) Enable Azure Cosmos DB firewall rules

B) Configure virtual network service endpoints

C) Implement customer-managed keys with Azure Key Vault

D) Enable Azure Private Link

Answer: C

Explanation:

Implementing customer-managed keys with Azure Key Vault provides the highest level of control over encryption for Azure Cosmos DB data at rest. By default, Azure Cosmos DB encrypts all data at rest using Microsoft-managed keys, but customer-managed keys give you complete control over the encryption keys, including rotation policies and access management.

When you configure customer-managed keys, Azure Cosmos DB uses Azure Key Vault to store and manage your encryption keys. The service uses these keys to encrypt the database encryption keys, which in turn encrypt your actual data. This double encryption approach provides an additional security layer. You maintain full control over key lifecycle, including creation, rotation, and revocation.

For data in transit, Azure Cosmos DB enforces TLS 1.2 or higher for all connections by default, ensuring encryption during transmission. The combination of customer-managed keys for data at rest and enforced TLS for data in transit provides comprehensive encryption coverage. You can also configure minimum TLS version requirements to ensure clients use secure protocols.

The implementation involves creating a Key Vault, generating or importing encryption keys, granting Azure Cosmos DB access to the Key Vault through managed identity, and configuring the Cosmos DB account to use customer-managed keys. Once configured, all encryption operations use your keys transparently.

Option A (firewall rules) controls network access to Azure Cosmos DB by allowing only specific IP addresses or ranges, but this is an access control feature rather than an encryption mechanism.

Option B (virtual network service endpoints) secures network traffic by keeping it within the Azure backbone network and enabling private IP addresses for access, improving security but not providing encryption configuration.

Option D (Private Link) creates private endpoints for Cosmos DB in your virtual network, ensuring traffic never traverses the public internet, but like the other options, this is about network security rather than encryption configuration.

Question 168

You need to deploy an Azure Function app using Azure DevOps. The deployment must support slot swapping and minimize downtime. What should you include in your pipeline?

A) Azure App Service Deploy task with slot deployment option

B) Azure Resource Manager template deployment task

C) Azure CLI task with az functionapp deployment commands

D) Azure PowerShell task with swap operation

Answer: A

Explanation:

The Azure App Service Deploy task with slot deployment option is the most comprehensive and recommended approach for deploying Azure Function apps with slot swapping capabilities in Azure DevOps. This task is specifically designed for App Service and Function App deployments and includes built-in support for deployment slots.

When you configure the Azure App Service Deploy task, you can specify a deployment slot such as staging or testing. The task deploys your function app code to this slot first, allowing you to test and validate the deployment before affecting production. The task includes options for slot-specific configuration settings, ensuring that environment-specific values are maintained correctly.

After deploying to a slot, you can use the same task or a separate swap operation to exchange the slot with production. The swap operation is atomic and instantaneous, minimizing downtime. During the swap, Azure warms up the new version before redirecting traffic, ensuring that cold start issues don’t impact users. If issues are detected, you can quickly swap back to the previous version.

The task also supports advanced features like Run from Package deployment, which keeps function app files read-only and improves cold start performance. You can configure pre-swap and post-swap validation, auto-swap for continuous deployment scenarios, and slot-specific application settings that don’t swap with the code.

Option B (ARM template deployment) can deploy function apps but is more focused on infrastructure provisioning rather than application deployment. While it can create slots, it doesn’t provide the same deployment workflow optimization.

Option C (Azure CLI task) provides flexibility through command-line operations but requires more manual scripting to implement the complete deployment workflow including validation and error handling.

Option D (PowerShell task) offers similar capabilities to CLI but requires PowerShell script development and doesn’t provide the structured, task-based approach of the App Service Deploy task.

Question 169

You are implementing Azure Event Grid to handle events from multiple sources. You need to filter events based on specific criteria before sending them to subscribers. What should you configure?

A) Event Grid domain with filtering

B) Event subscription with advanced filters

C) Logic App with condition action

D) Azure Function with event filtering logic

Answer: B

Explanation:

Event subscription with advanced filters is the native and most efficient way to filter events in Azure Event Grid before they reach subscribers. Advanced filters allow you to specify filtering criteria directly in the event subscription configuration, ensuring that only relevant events are delivered, reducing unnecessary processing and costs.

Azure Event Grid supports multiple filter types including subject filters and advanced filters. Subject filters match the beginning or end of the event subject field, which is simple but limited. Advanced filters provide much more powerful filtering capabilities, allowing you to filter on any property in the event data using operators like NumberIn, NumberNotIn, StringContains, StringBeginsWith, BoolEquals, and more.

You can combine multiple filter conditions using AND logic, creating complex filtering rules. For example, you might filter events where the event type equals BlobCreated AND the data.blobType equals BlockBlob AND the subject begins with a specific path. Event Grid evaluates these filters at the platform level before delivering events, making it highly efficient.

Advanced filters support up to 25 filter conditions per subscription and can filter on nested properties in the event data using dot notation. The filtering happens server-side, meaning filtered-out events don’t count against your subscriber’s processing capacity or costs. This is particularly important when dealing with high-volume event sources.

Option A (Event Grid domain with filtering) is about organizing multiple event subscriptions under a single domain for management purposes, but filtering still happens at the subscription level through advanced filters.

Option C (Logic App with condition action) implements filtering in the subscriber application itself, meaning all events are delivered to the Logic App first, then filtered. This increases costs and latency.

Option D (Azure Function with filtering logic) similarly implements filtering in application code, requiring all events to be delivered and processed, which is less efficient than platform-level filtering.

Question 170

You are developing a web application that uses Azure AD B2C for authentication. Users must be able to sign in using social identity providers. What should you configure?

A) Custom policies with identity provider integration

B) User flows with social identity provider connections

C) Application registration with federation settings

D) Conditional Access policies with external identities

Answer: B

Explanation:

User flows with social identity provider connections represent the recommended and straightforward approach for implementing social identity provider sign-in in Azure AD B2C. User flows are pre-built, configurable authentication journeys that handle the complete sign-in, sign-up, and profile editing experiences with minimal configuration required.

Azure AD B2C supports multiple social identity providers out of the box, including Microsoft Account, Google, Facebook, Amazon, LinkedIn, Twitter, and GitHub. To configure social sign-in, you first register your application with the social identity provider to obtain client credentials. Then, in Azure AD B2C, you create an identity provider connection using these credentials and add it to your user flow.

When you add social identity providers to a user flow, Azure AD B2C automatically handles the OAuth/OpenID Connect protocol negotiation, token exchange, and user profile retrieval. Users see the social provider as a sign-in option on your authentication pages, and when they choose it, they’re redirected to the provider’s sign-in page, authenticate there, and return to your application with an Azure AD B2C token.

User flows also support combining social identity providers with local accounts, allowing users to choose their preferred authentication method. You can customize the user experience through page layouts and branding while maintaining the security and protocol handling that Azure AD B2C provides. The solution scales automatically and handles security updates.

Option A (custom policies) provides more flexibility and control over the authentication journey, allowing complex scenarios like custom attribute collection or integration with identity providers not natively supported. However, custom policies require more expertise and are unnecessary for standard social identity provider integration.

Option C (application registration) is where you configure your application’s OAuth settings but doesn’t directly enable social identity provider sign-in for users.

Option D (Conditional Access policies) control access based on conditions like location or device state but don’t configure identity provider connections.

Question 171

You need to implement a message-based solution where messages can be delivered to multiple subscribers and each subscriber receives all messages. What should you use?

A) Azure Service Bus Queue

B) Azure Service Bus Topic with multiple subscriptions

C) Azure Event Hub

D) Azure Storage Queue

Answer: B

Explanation:

Azure Service Bus Topic with multiple subscriptions is the ideal solution for publish-subscribe messaging patterns where multiple subscribers need to receive all messages. Topics provide one-to-many communication, allowing a single message sent to the topic to be delivered to multiple subscriptions independently.

When you create a Service Bus topic, you can add multiple subscriptions to it. Each subscription acts as an independent queue that receives a copy of every message sent to the topic. Subscribers connect to their respective subscriptions and consume messages independently. This architecture supports scenarios where different services need to process the same event in different ways.
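As a rough illustration with the Azure.Messaging.ServiceBus SDK (the topic name, subscription name, and connection setting are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class PubSubExample
{
    public static async Task RunAsync()
    {
        await using var client = new ServiceBusClient(
            Environment.GetEnvironmentVariable("ServiceBusConnectionString"));

        // Publish once to the topic; every subscription receives its own copy.
        ServiceBusSender sender = client.CreateSender("order-events");
        await sender.SendMessageAsync(new ServiceBusMessage("OrderCreated:1234"));

        // Each subscriber reads independently from its own subscription.
        ServiceBusReceiver receiver = client.CreateReceiver("order-events", "billing");
        ServiceBusReceivedMessage received = await receiver.ReceiveMessageAsync();
        Console.WriteLine(received.Body.ToString());
        await receiver.CompleteMessageAsync(received);
    }
}
```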

Service Bus topics support advanced filtering capabilities at the subscription level. You can configure SQL filters, correlation filters, or boolean filters on each subscription to receive only specific messages based on properties or content. This allows selective message delivery while maintaining the publish-subscribe pattern. Each subscription can have its own message time-to-live, dead-lettering policy, and forwarding rules.

The Service Bus topic also provides enterprise messaging features including transactions, duplicate detection, sessions for ordered processing, and scheduled message delivery. These features make it suitable for reliable business-critical messaging scenarios where message delivery guarantees are important.

Option A (Service Bus Queue) implements point-to-point messaging where each message is consumed by exactly one receiver, not suitable for delivering messages to multiple subscribers simultaneously.

Option C (Event Hub) is designed for high-throughput streaming scenarios with millions of events per second, using a partitioned consumer model. While multiple consumers can read from Event Hub, it’s optimized for streaming analytics rather than traditional message delivery patterns.

Option D (Storage Queue) provides simple queue semantics for point-to-point messaging with at-least-once delivery but doesn’t support publish-subscribe patterns or multiple subscribers receiving the same message.

Question 172

You are developing an Azure Logic App that processes files from Azure Blob Storage. The Logic App must trigger only when a new file is added to a specific container. What should you configure?

A) Recurrence trigger with Blob storage action

B) HTTP request trigger with webhook

C) Event Grid trigger with blob created event

D) Blob polling trigger

Answer: C

Explanation:

The Event Grid trigger with blob created event is the most efficient and recommended approach for triggering Azure Logic Apps when new files are added to Azure Blob Storage. Event Grid provides event-driven architecture where storage events are published immediately when they occur, resulting in near-real-time processing without polling overhead.

When you configure an Event Grid trigger in your Logic App, Azure Storage publishes events to Event Grid whenever blobs are created, deleted, or modified. The Event Grid trigger subscribes to these events and executes the Logic App workflow immediately when a matching event occurs. This approach is more efficient than polling because it eliminates constant storage account queries and reduces latency.

Event Grid triggers support filtering capabilities, allowing you to specify which blob containers and blob name patterns should trigger the workflow. You can filter by event type, subject, and data properties, ensuring that only relevant events trigger your Logic App. For example, you might trigger only on BlobCreated events in a specific container with filenames ending in a particular extension.

The solution scales automatically as Event Grid handles event delivery with built-in retry logic and dead-lettering for failed deliveries. Event Grid guarantees at-least-once delivery, ensuring that events aren’t lost. The service provides high throughput, handling millions of events per second, making it suitable for high-volume scenarios.

Option A (Recurrence trigger with Blob storage action) requires the Logic App to poll the storage account at regular intervals, which is less efficient, increases costs, and introduces latency between file creation and processing.

Option B (HTTP request trigger with webhook) requires custom code to implement the webhook endpoint and register it with storage events, adding unnecessary complexity.

Option D (Blob polling trigger) isn’t a native Logic Apps trigger type, though similar functionality can be achieved through recurrence triggers with polling logic.

Question 173

You need to implement caching for an ASP.NET Core web application running on Azure App Service to improve performance. The cache must be shared across multiple instances. What should you use?

A) In-memory caching with IMemoryCache

B) Azure Redis Cache with distributed caching

C) Output caching in ASP.NET Core

D) Azure CDN for static content

Answer: B

Explanation:

Azure Redis Cache with distributed caching is the correct solution for implementing shared caching across multiple App Service instances. Redis provides a centralized, in-memory cache that all application instances can access, ensuring consistent cached data regardless of which instance processes a user request.

When implementing distributed caching with Redis in ASP.NET Core, you use the IDistributedCache interface, which provides a standardized API for cache operations. The Microsoft.Extensions.Caching.StackExchangeRedis package implements this interface for Redis. You configure the cache in your application’s startup by specifying the Redis connection string and optional configuration like database number or key prefix.
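A minimal sketch of the registration and usage, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package and a "Redis" connection string in configuration (the cache key and instance prefix are illustrative):

```csharp
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis");
    options.InstanceName = "myapp:"; // optional key prefix shared by all instances
});

var app = builder.Build();

app.MapGet("/greeting", async (IDistributedCache cache) =>
{
    // Any App Service instance can read a value written by another instance.
    string greeting = await cache.GetStringAsync("greeting");
    if (greeting is null)
    {
        greeting = $"Generated at {DateTime.UtcNow:O}";
        await cache.SetStringAsync("greeting", greeting,
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });
    }
    return greeting;
});

app.Run();
```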

Azure Redis Cache offers multiple tiers including Basic, Standard, and Premium, with the Standard and Premium tiers providing high availability through replication. The service handles Redis server management, patching, and monitoring, allowing you to focus on application logic. Redis supports various data structures beyond simple key-value pairs, including lists, sets, sorted sets, and hashes.

The distributed cache is essential in scaled-out scenarios where multiple App Service instances handle requests. Without shared caching, each instance would maintain its own cache, leading to cache inconsistency and increased backend load. Redis ensures that cached data is available to all instances, improving performance and reducing database queries.

Option A (In-memory caching with IMemoryCache) stores cache data in the application process memory, which isn’t shared across multiple instances. Each instance maintains its own cache, leading to inconsistency and cache duplication.

Option C (Output caching) caches the complete HTTP response but in ASP.NET Core’s default implementation, this cache is also local to each instance and doesn’t provide shared caching across instances.

Option D (Azure CDN) is excellent for caching static content like images, JavaScript, and CSS files at edge locations globally, but it doesn’t provide application-level caching for dynamic content.

Question 174

You are implementing Azure Cognitive Services in your application. You need to ensure that API keys are not hardcoded in your application code. What should you implement?

A) Store keys in application configuration files

B) Use Azure Key Vault to store keys and access them via managed identity

C) Store keys in environment variables

D) Use Azure App Configuration service

Answer: B

Explanation:

Using Azure Key Vault to store Cognitive Services API keys and accessing them via managed identity represents the most secure approach for managing sensitive credentials. This solution eliminates the need to hardcode keys in application code or configuration files while providing enterprise-grade security and access control.

Azure Key Vault is specifically designed for secrets management, providing encrypted storage for API keys, connection strings, certificates, and other sensitive data. When you store Cognitive Services keys in Key Vault, you gain benefits including encryption at rest and in transit, access auditing, and granular access control through Azure RBAC or access policies.

Managed identity eliminates the need for credentials in your application code entirely. When you enable managed identity on your Azure resource (like App Service, Azure Function, or VM), Azure automatically manages the credential lifecycle. Your application authenticates to Key Vault using this managed identity without requiring any secrets in code. The Azure SDK provides seamless integration for retrieving secrets.

The implementation involves creating a Key Vault, storing your Cognitive Services keys as secrets, enabling managed identity on your compute resource, and granting the managed identity permission to read secrets. In your application code, you use the Azure SDK to retrieve secrets using DefaultAzureCredential, which automatically uses the managed identity when running in Azure.
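A minimal sketch of the retrieval step, assuming the Azure.Identity and Azure.Security.KeyVault.Secrets packages; the vault URI and secret name are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class KeyVaultExample
{
    public static async Task<string> GetCognitiveServicesKeyAsync()
    {
        // DefaultAzureCredential uses the managed identity when running in Azure
        // and falls back to developer credentials (CLI, Visual Studio) locally.
        var client = new SecretClient(
            new Uri("https://my-vault.vault.azure.net/"),
            new DefaultAzureCredential());

        KeyVaultSecret secret = await client.GetSecretAsync("CognitiveServicesKey");
        return secret.Value;
    }
}
```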

Option A (configuration files) still exposes keys in files that could be accidentally committed to source control or accessed by unauthorized users, creating security vulnerabilities.

Option C (environment variables) is better than hardcoding but still requires the keys to be set somewhere, often in deployment configurations, and they’re visible to anyone with access to the environment settings.

Option D (App Configuration service) is designed for application settings and feature flags rather than secrets management, though it can reference Key Vault secrets.

Question 175

You are developing an Azure Function that processes data from Azure Event Hub. The function must process events in parallel while maintaining partition ordering. What should you configure?

A) Set the cardinality property to Many in the EventHubTrigger attribute

B) Configure MaxBatchSize in host.json

C) Enable parallel execution in function.json

D) Set ProcessEventsOptions.MaxBatchSize in code

Answer: A

Explanation:

Setting the cardinality property to Many in the EventHubTrigger attribute enables parallel processing of events from different Event Hub partitions while maintaining the ordering guarantee within each partition. This configuration allows the Azure Functions runtime to process multiple batches of events concurrently, optimizing throughput.

Event Hubs organize events into partitions, and within each partition, events maintain their order. By setting cardinality to Many, the function receives an array of events in each invocation, and the runtime can invoke the function multiple times in parallel for different partitions. Each function invocation processes a batch of events from a single partition, preserving the ordering guarantee that Event Hub provides.

This approach significantly improves processing throughput compared to single-event processing (cardinality set to One), especially for high-volume scenarios. The function can process hundreds or thousands of events per invocation, reducing the overhead of function invocations and improving overall efficiency. The runtime automatically manages parallel execution across partitions.

The configuration works in conjunction with other Event Hub trigger settings like MaxBatchSize, which determines how many events are included in each batch. The combination of parallel partition processing and batch processing provides optimal performance while respecting Event Hub’s ordering semantics.
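As a rough C# illustration (in-process model, Microsoft.Azure.WebJobs.Extensions.EventHubs), Many cardinality corresponds to binding the trigger to an array parameter so each invocation receives a batch; the hub name and connection setting are placeholders:

```csharp
using Azure.Messaging.EventHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TelemetryProcessor
{
    [FunctionName("ProcessTelemetry")]
    public static void Run(
        [EventHubTrigger("telemetry", Connection = "EventHubConnection")]
        EventData[] events,
        ILogger log)
    {
        // Each invocation receives a batch from a single partition, so ordering is
        // preserved within the batch while different partitions run in parallel.
        foreach (EventData e in events)
        {
            log.LogInformation("Event: {Body}", e.EventBody.ToString());
        }
    }
}
```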

Option B (configuring MaxBatchSize in host.json) controls the maximum number of events per batch but doesn’t directly enable parallel processing across partitions. It works together with cardinality settings but isn’t sufficient alone.

Option C (enabling parallel execution in function.json) isn’t a valid configuration option for Event Hub triggers. Parallel execution is controlled through the cardinality property in the trigger attribute.

Option D (setting ProcessEventsOptions.MaxBatchSize) is relevant when using the Event Hub SDK directly but not when configuring Azure Function triggers, which use the Function-specific configuration approach.

Question 176

You need to implement authorization in an ASP.NET Core Web API that validates JWT tokens issued by Azure AD. What should you configure in Startup.cs?

A) AddJwtBearer with Azure AD settings and authority URL

B) AddCookie with authentication properties

C) AddOpenIdConnect with client credentials

D) AddOAuth with authorization endpoint

Answer: A

Explanation:

Configuring AddJwtBearer with Azure AD settings and authority URL is the correct approach for implementing JWT token validation in ASP.NET Core Web APIs. The JWT Bearer authentication handler validates incoming tokens, verifies their signatures against Azure AD’s public keys, and extracts user claims for authorization decisions.

When you configure JWT Bearer authentication, you specify the authority URL pointing to your Azure AD tenant, which the middleware uses to discover metadata and obtain token signing keys. The configuration includes settings like the valid audience (your API’s Application ID URI) and the valid issuer. The middleware automatically downloads Azure AD’s public signing keys and caches them.

The AddJwtBearer method configures the authentication pipeline to expect JWT tokens in the Authorization header with the Bearer scheme. When a request arrives, the middleware validates the token’s signature, expiration, audience, and issuer. If validation succeeds, the middleware creates a ClaimsPrincipal from the token claims, which your API can use for authorization decisions through policies or role-based access control.
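A minimal configuration sketch (shown here with the .NET 6+ hosting model; the tenant ID and audience values are placeholders for your Azure AD registration):

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Azure AD tenant used to discover issuer metadata and token signing keys.
        options.Authority = "https://login.microsoftonline.com/{tenant-id}/v2.0";
        // Must match the Application ID URI (or client ID) of the API registration.
        options.Audience = "api://my-api-app-id";
    });
builder.Services.AddAuthorization();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.MapGet("/secure", () => "authorized").RequireAuthorization();
app.Run();
```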

The solution supports advanced scenarios including token validation parameter customization, event handlers for authentication events, and integration with Azure AD B2C or other OpenID Connect providers. You can validate additional claims, implement custom validation logic, and handle authentication failures through configuration options.

Option B (AddCookie) is used for browser-based applications where the authentication cookie stores session information, not for API token validation scenarios where each request includes a bearer token.

Option C (AddOpenIdConnect) is designed for interactive sign-in flows in web applications where users are redirected to Azure AD for authentication, not for validating tokens in API requests.

Option D (AddOAuth) provides generic OAuth integration but doesn’t include the JWT-specific validation logic and Azure AD integration that AddJwtBearer provides out of the box.

Question 177

You are implementing Azure Monitor to track custom metrics for your application. The solution must allow querying and alerting on these metrics. What should you use?

A) Application Insights TrackMetric method

B) Azure Monitor custom metrics REST API

C) Azure Log Analytics custom logs

D) Event Hub with Stream Analytics

Answer: A

Explanation:

The Application Insights TrackMetric method is the recommended approach for tracking custom metrics in Azure applications. This method integrates seamlessly with Application Insights’ metrics system, allowing you to emit custom measurements that can be queried, visualized, and used for alerting alongside standard Application Insights metrics.

When you use TrackMetric, you specify a metric name and value, along with optional properties for dimensions. Application Insights aggregates these metrics over time intervals, calculating statistics like count, sum, min, max, and average. The metrics appear in the Metrics Explorer alongside standard metrics like response time and request rate, providing a unified view of application performance.

Custom metrics support multi-dimensional data through properties, allowing you to segment metrics by various criteria. For example, a custom metric tracking order processing time could include dimensions for region, product category, and payment method. You can then analyze the metric across these dimensions in Metrics Explorer or create alerts based on specific dimensional values.
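For example, a minimal sketch using the Application Insights SDK (the metric name and dimension are illustrative assumptions):

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public class OrderTelemetry
{
    private readonly TelemetryClient _telemetryClient;

    public OrderTelemetry(TelemetryClient telemetryClient) => _telemetryClient = telemetryClient;

    public void RecordProcessingTime(double milliseconds, string region)
    {
        // Emits a custom metric with a dimension that can be split on in Metrics Explorer.
        // (GetMetric(...).TrackValue(...) is the newer pre-aggregating alternative.)
        _telemetryClient.TrackMetric("OrderProcessingTimeMs", milliseconds,
            new Dictionary<string, string> { ["Region"] = region });
    }
}
```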

Application Insights provides powerful querying capabilities through Kusto Query Language in Log Analytics, allowing complex analysis of custom metrics. You can create visualizations in Azure Dashboards, set up metric alerts with dynamic thresholds, and integrate metrics into Azure Monitor workbooks for comprehensive reporting.

Option B (Azure Monitor custom metrics REST API) allows direct submission of metrics to Azure Monitor but requires more manual configuration and doesn’t provide the automatic correlation with other Application Insights telemetry like requests, dependencies, and exceptions.

Option C (Azure Log Analytics custom logs) stores custom log data that can be queried but isn’t optimized for metric time-series data and doesn’t provide the same aggregation and alerting capabilities as metrics.

Option D (Event Hub with Stream Analytics) is designed for real-time stream processing rather than application metrics tracking.

Question 178

You need to deploy an Azure Resource Manager template that creates multiple resources with dependencies. One resource must be created only if a parameter value equals ‘true’. What should you use?

A) Conditions in the resource definition

B) Copy loops with conditional logic

C) Nested templates with parameters

D) Resource dependencies with dependsOn

Answer: A

Explanation:

Conditions in ARM template resource definitions allow you to control whether a resource is created based on parameter values or other expressions. The condition property evaluates to true or false, and Azure Resource Manager only creates the resource when the condition is true, providing flexible deployment control.

To implement conditional resource creation, you add a condition property to the resource definition with an expression that evaluates the parameter. For example, "condition": "[parameters('deployOptionalResource')]" creates the resource only when the parameter is true. The condition can use any valid ARM template expression, including comparison operators, logical operators, and template functions.

Conditions work seamlessly with resource dependencies. If a conditional resource isn’t created because its condition is false, Azure Resource Manager automatically handles dependencies pointing to that resource. Resources depending on the conditional resource won’t fail deployment; they simply skip the dependency.

You can combine conditions with other ARM template features like copy loops, nested templates, and outputs. Conditional outputs only appear when their associated resource is created. This pattern is useful for creating environment-specific resources, implementing feature flags in infrastructure, or creating optional monitoring or backup resources.

Option B (copy loops with conditional logic) creates multiple instances of resources but doesn’t provide true conditional deployment. Copy loops always create the specified number of instances, though you could use conditions within the loop.

Option C (nested templates with parameters) allows modular template composition but doesn't provide conditional resource creation unless combined with conditions. You would still need conditions to control whether the nested template deployment occurs.

Option D (resource dependencies with dependsOn) controls deployment order by specifying that one resource must be created before another, but it doesn’t provide conditional creation logic. Dependencies ensure proper sequencing regardless of conditions.

Question 179

You are developing an application that needs to process large files uploaded to Azure Blob Storage. The processing takes several minutes. What pattern should you implement?

A) Synchronous processing in the upload API

B) Queue-based asynchronous processing with Azure Storage Queue

C) Direct processing in Azure Function with Blob trigger

D) Polling-based processing with scheduled jobs

Answer: B

Explanation:

Queue-based asynchronous processing with Azure Storage Queue is the optimal pattern for handling long-running file processing operations. This pattern decouples the upload operation from processing, allowing the upload API to return immediately while processing happens in the background, improving user experience and application scalability.

The implementation involves the upload API writing file metadata to an Azure Storage Queue after successfully uploading the file to Blob Storage. A separate worker process or Azure Function with a Queue trigger picks up messages from the queue and processes the files. This architecture provides several benefits including automatic retry handling, poison message management, and the ability to scale processing independently from the upload service.
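A rough sketch of both halves of the pattern, with the queue name, blob name, and connection setting as placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class FileProcessing
{
    // 1) Upload API: after saving the blob, enqueue a lightweight work item and return.
    public static async Task EnqueueWorkItemAsync(string blobName)
    {
        var queue = new QueueClient(
            Environment.GetEnvironmentVariable("StorageConnection"),
            "file-processing",
            // Base64 encoding matches what the Functions queue trigger typically expects.
            new QueueClientOptions { MessageEncoding = QueueMessageEncoding.Base64 });
        await queue.CreateIfNotExistsAsync();
        await queue.SendMessageAsync(blobName);
    }

    // 2) Background worker: a queue-triggered function performs the long-running processing.
    [FunctionName("ProcessUploadedFile")]
    public static void Run(
        [QueueTrigger("file-processing", Connection = "StorageConnection")] string blobName,
        ILogger log)
    {
        // Failed messages are retried automatically and moved to the
        // "file-processing-poison" queue after the maximum dequeue count.
        log.LogInformation("Processing blob {BlobName}", blobName);
    }
}
```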

Azure Storage Queues provide reliable message delivery with at-least-once semantics, ensuring that processing messages aren’t lost even if the worker fails. If processing fails after the maximum retry attempts, messages automatically move to a poison queue for investigation. The queue also provides visibility timeout, which locks messages during processing and returns them to the queue if processing doesn’t complete within the timeout.

This pattern allows you to scale the processing tier based on queue length, adding more worker instances when the queue grows and reducing them when it’s empty. You can implement priority processing by using multiple queues, throttle processing rates to avoid overwhelming downstream systems, and easily monitor processing status through queue metrics.

Option A (synchronous processing) blocks the upload API during processing, leading to timeouts, poor user experience, and resource exhaustion under load. Long-running synchronous operations are not scalable.

Option C (direct processing in Azure Function with Blob trigger) works for shorter operations but has limitations for long-running processes. Azure Functions have execution time limits, and blob triggers have polling latency, making this less suitable for immediate, long-running processing.

Option D (polling-based processing with scheduled jobs) introduces unnecessary delays and inefficiency compared to event-driven processing through queues. Jobs run on a schedule regardless of workload, wasting resources during idle periods.

Question 180

You need to implement Azure Active Directory authentication in a single-page application (SPA). The solution must follow security best practices. What should you use?

A) Implicit grant flow with access tokens

B) Authorization code flow with PKCE

C) Client credentials flow

D) Resource owner password credentials flow

Answer: B

Explanation:

Authorization code flow with PKCE (Proof Key for Code Exchange) is the current security best practice for implementing Azure AD authentication in single-page applications. This flow provides better security than the implicit grant flow by using authorization codes instead of returning tokens directly in the URL, and PKCE prevents authorization code interception attacks.

The authorization code flow with PKCE works by having the SPA generate a cryptographic random string called a code verifier and its hashed value called the code challenge. The SPA sends the code challenge when requesting the authorization code from Azure AD. When exchanging the authorization code for tokens, the SPA sends the code verifier, and Azure AD verifies it matches the original code challenge.

This approach provides several security advantages. Tokens never appear in the browser URL, reducing exposure risks. The authorization code is single-use and short-lived, limiting its value to attackers. PKCE prevents attacks where malicious apps intercept authorization codes because they cannot complete the exchange without the code verifier.

Microsoft’s MSAL.js 2.0 library implements this flow for SPAs, handling the PKCE generation and token management automatically. The library manages token caching, silent token renewal using refresh tokens, and proper error handling. It also supports advanced scenarios like multi-account sign-in and token claims validation.

Option A (implicit grant flow) was previously the standard for SPAs but is now considered less secure. It returns tokens directly in the URL fragment, exposing them to potential interception. Microsoft now recommends migrating from implicit flow to authorization code flow with PKCE.

Option C (client credentials flow) is designed for service-to-service authentication where there’s no user context, not for SPAs where users sign in interactively. This flow requires a client secret, which cannot be securely stored in browser-based applications.

Option D (resource owner password credentials flow) requires the application to handle user credentials directly, which violates security best practices and doesn’t support modern authentication features like MFA or conditional access.
