Microsoft AZ-204 Developing Solutions for Azure Exam Dumps and Practice Test Questions, Set 6 (Q101-120)


Q101 

You are developing an application that needs to store user session data with automatic expiration after 30 minutes of inactivity. The solution must support high concurrency. What should you use?

A) Azure SQL Database with custom expiration logic

B) Azure Table Storage with TTL

C) Azure Cache for Redis with sliding expiration

D) Azure Cosmos DB with TTL

Answer: C

Explanation:

Azure Cache for Redis with sliding expiration is the optimal solution for session management with automatic timeout based on inactivity. Redis provides in-memory performance with native session support and sliding expiration patterns that automatically extend timeout with each access.

Redis offers sub-millisecond response times essential for session operations that occur on every request. The sliding expiration pattern resets the timeout each time the session is accessed, ensuring active users remain logged in while inactive sessions expire automatically after 30 minutes. This is implemented using the EXPIRE command with each GET operation, or through IDistributedCache in ASP.NET Core which handles this automatically.
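
As a minimal sketch, the following ASP.NET Core registration stores session entries in Azure Cache for Redis with a 30-minute sliding window; the cache host name, key format, and SaveSessionAsync helper are illustrative assumptions rather than part of any particular application.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddStackExchangeRedisCache(options =>
{
    // Placeholder connection string for your Azure Cache for Redis instance.
    options.Configuration = "<your-cache>.redis.cache.windows.net:6380,ssl=True,abortConnect=False,password=<key>";
});

// Sliding expiration: each read or write of the entry pushes its expiry
// out by another 30 minutes, so only inactive sessions expire.
static async Task SaveSessionAsync(IDistributedCache cache, string sessionId, byte[] payload)
{
    await cache.SetAsync($"session:{sessionId}", payload, new DistributedCacheEntryOptions
    {
        SlidingExpiration = TimeSpan.FromMinutes(30)
    });
}
```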

The high concurrency support is a key advantage. Redis handles thousands of concurrent operations per second on a single instance, with Premium tier supporting clustering for even higher throughput. Session data is stored in memory with optional persistence to disk, providing both speed and durability. Redis supports atomic operations ensuring session updates don’t conflict even under heavy concurrent load.

Integration with ASP.NET Core is seamless through the distributed session state provider. Configuration is simple – you register Redis as the session store and the framework handles serialization, expiration management, and session locking automatically. Redis also supports advanced scenarios like session locking to prevent concurrent modifications and pub/sub for session invalidation across instances.

Azure SQL Database introduces latency for session operations and requires custom implementation of expiration logic through background jobs or computed columns. Azure Table Storage has no native TTL support, and Cosmos DB TTL expires items a fixed interval after their last write rather than after their last read, so neither offers sliding expiration based on access without rewriting the item on every request. Both also lack the in-memory performance characteristics needed for high-frequency session operations.

Q102 

You need to implement a webhook receiver in Azure that processes GitHub events. The solution must validate webhook signatures for security. What should you implement?

A) Azure Logic App with HTTP trigger

B) Azure Function with HTTP trigger and signature validation

C) Azure API Management with validation policy

D) Azure Event Grid custom topic

Answer: B

Explanation:

Azure Function with HTTP trigger and signature validation provides the secure and flexible solution for receiving and validating GitHub webhooks. Functions offer code-based validation of webhook signatures while maintaining the simplicity and cost-effectiveness of serverless architecture.

GitHub webhooks include an HMAC-SHA256 signature in the X-Hub-Signature-256 header, computed using a shared secret. Your Function must validate this signature to ensure the request genuinely comes from GitHub and hasn’t been tampered with. The validation process involves computing the HMAC of the request body using your shared secret and comparing it with the provided signature using a constant-time comparison to prevent timing attacks.

The implementation is straightforward in Azure Functions. You configure an HTTP trigger accepting POST requests, retrieve the signature from headers, compute the expected HMAC using the secret stored in Key Vault or App Settings, and validate before processing the payload. If validation fails, return 401 Unauthorized immediately. This code-level control enables sophisticated validation logic beyond simple signature checking.
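
A hedged sketch of that flow in an in-process, HTTP-triggered function is shown below; the GitHubWebhookSecret app setting name is a placeholder, and production code would typically load the secret from Key Vault.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GitHubWebhook
{
    [FunctionName("GitHubWebhook")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
    {
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        string secret = Environment.GetEnvironmentVariable("GitHubWebhookSecret");
        string received = req.Headers["X-Hub-Signature-256"].ToString();

        // Compute the expected HMAC-SHA256 over the raw request body.
        using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret));
        string expected = "sha256=" +
            Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(body))).ToLowerInvariant();

        // Constant-time comparison to avoid timing attacks.
        bool valid = CryptographicOperations.FixedTimeEquals(
            Encoding.UTF8.GetBytes(expected), Encoding.UTF8.GetBytes(received));

        if (!valid) return new UnauthorizedResult();

        // ...process the validated payload here...
        return new OkResult();
    }
}
```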

Azure Functions provide automatic scaling to handle webhook bursts during heavy repository activity. The Consumption plan offers cost efficiency since you only pay for actual executions. Functions integrate seamlessly with other Azure services – you can publish validated events to Service Bus, store data in Cosmos DB, or trigger deployments through Azure DevOps.

Logic Apps can receive HTTP requests but signature validation requires custom code actions or Azure Functions anyway, adding unnecessary complexity. API Management validation policies can check headers but don’t support the HMAC computation needed for GitHub signatures. Event Grid custom topics require you to adapt GitHub webhooks to Event Grid schema rather than processing native GitHub events.

Q103 

You are implementing Azure Application Insights for a microservices application. You need to track custom business metrics like order completion rate. What should you use?

A) Custom events with TrackEvent

B) Custom metrics with TrackMetric

C) Traces with TrackTrace

D) Dependencies with TrackDependency

Answer: B

Explanation:

Custom metrics with TrackMetric is specifically designed for tracking numerical business metrics and KPIs. TrackMetric enables you to send aggregated measurements to Application Insights where they can be charted, analyzed, and alerted on using the metrics explorer.

TrackMetric accepts a metric name and numerical value, with optional properties for dimensions that enable filtering and grouping. For order completion rate, you would calculate the rate in your code and send the value periodically using telemetryClient.TrackMetric. Application Insights automatically aggregates these measurements, computing min, max, sum, and count over time intervals.
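
A minimal sketch using the Application Insights .NET SDK might look like the following; the metric name, the Region dimension, and the OrderMetrics class are illustrative assumptions.

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public class OrderMetrics
{
    private readonly TelemetryClient _telemetryClient;
    public OrderMetrics(TelemetryClient telemetryClient) => _telemetryClient = telemetryClient;

    public void ReportCompletionRate(double completedOrders, double totalOrders, string region)
    {
        // Send a computed rate as a custom metric with a Region dimension.
        var metric = new MetricTelemetry("OrderCompletionRate",
            totalOrders == 0 ? 0 : completedOrders / totalOrders * 100);
        metric.Properties["Region"] = region;   // dimension used for filtering/splitting
        _telemetryClient.TrackMetric(metric);

        // Alternative: GetMetric pre-aggregates values in the SDK before sending.
        _telemetryClient.GetMetric("OrdersCompleted", "Region").TrackValue(completedOrders, region);
    }
}
```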

The key advantage is performance and cost efficiency. Metric values can be aggregated before transmission – either by your own code before calling TrackMetric, or automatically through the newer GetMetric API – reducing data volume compared to sending individual events. Application Insights stores metrics separately from telemetry events in an optimized time-series database, enabling fast queries over long time periods. You can create real-time dashboards showing business KPIs without impacting application performance.

Metrics support multi-dimensional analysis through properties. You can track order completion rate segmented by region, product category, or customer tier, then filter and split charts by these dimensions. The metrics explorer provides powerful visualization capabilities including line charts, bar charts, and grid views. You can configure metric alerts that trigger when values exceed thresholds, enabling proactive monitoring of business health.

Custom events with TrackEvent are designed for discrete occurrences like button clicks or feature usage, not numerical measurements. TrackTrace logs diagnostic messages for debugging, not business metrics. TrackDependency tracks external service calls like databases or APIs. While you could use events to count occurrences and compute rates in queries, this approach is less efficient and lacks the pre-aggregation benefits of metrics.

Q104 

You need to implement authentication for an Azure Storage account using Azure Active Directory. Client applications should use their managed identity. What should you configure?

A) Shared Key authorization with storage account keys

B) Shared Access Signatures (SAS) with stored access policies

C) Azure AD authentication with RBAC roles

D) Anonymous public access with IP restrictions

Answer: C

Explanation:

Azure AD authentication with RBAC roles provides secure, auditable access to Azure Storage using managed identities, eliminating the need for shared keys or signatures while providing fine-grained access control through role assignments.

Azure Storage supports Azure AD integration for Blob and Queue services, enabling applications to authenticate using their managed identity. You assign appropriate RBAC roles to the managed identity, such as Storage Blob Data Contributor for read/write access or Storage Blob Data Reader for read-only access. The application then uses DefaultAzureCredential or ManagedIdentityCredential to obtain access tokens automatically.
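
As a brief sketch with the Azure.Storage.Blobs and Azure.Identity packages (the account, container, and blob names are placeholders, and the identity is assumed to hold Storage Blob Data Contributor):

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

var blobServiceClient = new BlobServiceClient(
    new Uri("https://<account>.blob.core.windows.net"),
    new DefaultAzureCredential());   // managed identity in Azure, developer credentials locally

var container = blobServiceClient.GetBlobContainerClient("orders");

// Upload a blob using the token obtained for the managed identity.
await container.GetBlobClient("order-123.json")
    .UploadAsync(BinaryData.FromString("{ \"id\": 123 }"), overwrite: true);
```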

The security benefits are substantial. Managed identities eliminate credential management – no keys to rotate, store, or potentially leak. All access attempts are logged in Azure AD sign-in logs with details about which identity accessed what resources and when. This provides comprehensive audit trails for compliance and security investigations. Access can be revoked instantly by removing role assignments without changing any code.

RBAC provides granular permissions at various scopes. You can assign roles at the subscription, resource group, storage account, or individual container level for precise access control; role assignments do not go down to individual blobs. The built-in roles cover common scenarios, or you can create custom roles with specific permissions. This aligns with the principle of least privilege – grant only the necessary permissions to each identity.

Integration with Azure SDK is seamless. The BlobServiceClient and QueueClient constructors accept TokenCredential, and the SDK handles token acquisition, caching, and renewal automatically. No code changes are needed when deploying across environments since managed identity works consistently.

Shared keys provide unlimited access to the entire storage account and require secure storage. SAS tokens enable limited access but still require generation and management. Anonymous public access removes authentication entirely, suitable only for truly public content.

Q105 

You are developing an Azure Function that processes orders from a queue. Each order must be processed exactly once, and processing must complete within 5 minutes. What should you configure?

A) Queue trigger with default settings

B) Queue trigger with visibility timeout of 5 minutes

C) Queue trigger with maxDequeueCount of 1

D) Event Grid trigger with retry policy

Answer: B

Explanation:

Queue trigger with visibility timeout of 5 minutes ensures reliable exactly-once processing by keeping messages invisible to other consumers during processing while allowing recovery if processing fails or times out.

The visibility timeout determines how long a message remains invisible after being retrieved from the queue. When your Function retrieves a message, it becomes invisible to other consumers for the visibility timeout duration. If processing completes successfully and the Function deletes the message within this window, the message is permanently removed. If processing fails or exceeds the timeout, the message becomes visible again for retry by the same or another Function instance.

Setting the visibility timeout to 5 minutes aligns with your processing time requirement. The Function has up to 5 minutes to complete processing before the message reappears in the queue. For shorter processing times, you can use shorter timeouts to enable faster retry on failures. The Azure Storage Queue trigger automatically handles message deletion upon successful completion, ensuring exactly-once semantics.
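
As a rough illustration, the Storage queue extension exposes these knobs in host.json; note that the documented meaning of visibilityTimeout here is how long a failed or abandoned message stays invisible before it can be retried, and the exact semantics vary by extension version, so treat the values below as assumptions to verify against your runtime. batchSize controls how many messages are pulled in parallel, and maxDequeueCount controls when a message is moved to the poison queue.

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "visibilityTimeout": "00:05:00",
      "batchSize": 16,
      "maxDequeueCount": 5
    }
  }
}
```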

This mechanism prevents message loss and duplicate processing. If a Function instance crashes during processing, the message automatically becomes available after the timeout for another instance to process. If processing completes but deletion fails, the message reappears and is retried. This peek-lock style behavior ensures messages aren’t lost even during infrastructure failures.

Default queue trigger settings use a 30-second visibility timeout, too short for 5-minute processing. The maxDequeueCount setting controls when messages move to the poison queue after repeated failures, not the visibility timeout for in-flight processing. It should be set appropriately (default is 5) to handle transient failures while preventing infinite retry loops. Event Grid doesn’t natively integrate with Azure Storage Queues for message retrieval.

Q106

You need to implement content delivery for a video streaming application with geo-replication and low latency globally. What should you use?

A) Azure Blob Storage with LRS replication

B) Azure Blob Storage with GRS replication

C) Azure CDN with Azure Blob Storage origin

D) Azure Traffic Manager with multiple storage accounts

Answer: C

Explanation:

Azure CDN with Azure Blob Storage origin provides optimal global content delivery for video streaming, combining the scalability and cost-effectiveness of Blob Storage with CDN’s edge caching for minimal latency worldwide.

Azure CDN caches content at edge locations across the globe, serving users from the nearest point of presence. When a user requests a video, the CDN checks its local cache. If cached, the content is served immediately with minimal latency. If not cached, the CDN retrieves it from the Blob Storage origin, caches it at the edge, and serves it to the user. Subsequent requests from that region are served from cache, dramatically improving performance.

The architecture provides several advantages. Edge caching reduces origin load since most requests are served from CDN caches rather than storage. This decreases bandwidth costs since egress from CDN edges is typically cheaper than from storage. CDN handles traffic spikes without impacting origin storage performance. Users experience low latency regardless of geographic location since content is served from nearby edge servers.

Azure CDN supports features essential for video streaming including range requests for seeking, HTTP/2 for improved performance, custom caching rules based on file types or paths, and dynamic compression. You can configure cache expiration using Cache-Control headers to balance freshness with performance. CDN also provides HTTPS support with custom domains and DDoS protection.

Blob Storage with LRS (Locally Redundant Storage) or GRS (Geo-Redundant Storage) replication provides data durability but doesn’t improve content delivery latency since storage remains in specific Azure regions. GRS replicates to a secondary region but doesn’t serve content from multiple regions simultaneously. Traffic Manager routes DNS requests but doesn’t cache content, and managing multiple storage accounts adds operational complexity without CDN’s performance benefits.

Q107

You are implementing Azure Key Vault for secrets management. Application code needs to retrieve secrets without using explicit credentials. What authentication method should you implement?

A) Service principal with client secret

B) Certificate-based authentication

C) Managed identity with Key Vault access policy

D) Shared access signature

Answer: C

Explanation:

Managed identity with Key Vault access policy provides passwordless authentication to Key Vault, enabling applications to retrieve secrets securely without storing any credentials in code or configuration files.

Managed identities are Azure AD identities automatically managed by Azure for resources like App Service, Azure Functions, Virtual Machines, and AKS. When you enable managed identity for an Azure resource, Azure creates an identity in Azure AD and automatically manages its lifecycle and credentials. Your application uses this identity to authenticate to Key Vault without any credentials in code.

The implementation involves enabling managed identity on your Azure resource, then granting that identity permissions to access Key Vault secrets through either access policies or RBAC. Access policies specify which operations the identity can perform on secrets, keys, or certificates. For secrets, you typically grant Get and List permissions. Once configured, your application uses Azure SDK with DefaultAzureCredential or ManagedIdentityCredential to authenticate automatically.
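
A minimal sketch with the Azure.Security.KeyVault.Secrets and Azure.Identity packages follows; the vault URI and secret name are placeholders, and the managed identity is assumed to have Get permission on secrets.

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var client = new SecretClient(
    new Uri("https://<vault-name>.vault.azure.net/"),
    new DefaultAzureCredential());   // managed identity in Azure, local dev credentials otherwise

// Retrieve a secret without any credentials stored in code or configuration.
KeyVaultSecret secret = await client.GetSecretAsync("SqlConnectionString");
string connectionString = secret.Value;
```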

The security benefits are substantial. No credentials to store, rotate, or protect from leakage. All access attempts are logged in Azure AD and Key Vault audit logs, providing complete visibility into which identity accessed which secrets. Access can be revoked instantly by removing permissions without code changes. This aligns with zero trust principles where services prove identity through Azure AD rather than shared secrets.

The code is clean and environment-agnostic. DefaultAzureCredential automatically uses managed identity in Azure environments and falls back to local development credentials, enabling the same code to work across development and production without environment-specific credential handling.

Service principals with client secrets require storing the secret somewhere, just moving the problem. Certificate-based authentication requires certificate management and distribution. Shared access signatures are used for delegated access to storage, not Key Vault authentication.

Q108 

You need to implement a long-running background task that processes data from Azure Blob Storage. The task runs for 2 hours and must not timeout. What should you use?

A) Azure Function on Consumption plan

B) Azure WebJob with continuous execution

C) Azure Logic App with long-running action

D) Azure Container Instance with restart policy

Answer: B

Explanation:

Azure WebJob with continuous execution is designed for long-running background tasks without timeout limitations, providing reliable execution within the App Service environment with full control over execution duration and resource usage.

WebJobs are background tasks that run in the same context as your App Service web app, sharing the same App Service Plan resources. Continuous WebJobs start automatically when created and keep running until stopped, typically looping over their work; enabling Always On prevents the app from being unloaded while the job runs. There are no execution time limits, making them ideal for 2-hour processing tasks. The WebJob can implement its own logic for monitoring Blob Storage, processing files, and managing state.

The key advantages include no timeout constraints, full access to the file system and environment of the App Service, support for multiple programming languages (C#, Java, Node.js, Python), and automatic logging to App Service logs. WebJobs scale with your App Service Plan – on multi-instance plans, continuous WebJobs typically run on a single instance, but you can configure multiple instances if needed.

WebJobs integrate seamlessly with the WebJobs SDK which provides triggers for storage queues, blobs, and Service Bus. The SDK handles message processing, poison message handling, and graceful shutdown. For blob processing, the SDK monitors blob containers and triggers your function when new blobs appear, maintaining state to track processed blobs.
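
A hedged sketch of such a WebJobs SDK function is shown below; the container name and processing logic are placeholders, and the host that runs it (a continuous WebJob built with HostBuilder) is omitted for brevity.

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class BlobProcessor
{
    // Runs whenever a new blob appears in the "incoming" container; the SDK
    // tracks blob receipts so each blob is processed once.
    public async Task ProcessAsync(
        [BlobTrigger("incoming/{name}")] Stream blobStream,
        string name,
        ILogger logger)
    {
        logger.LogInformation("Processing blob {Name}", name);
        // ...long-running processing (potentially hours) with no platform timeout...
        await Task.CompletedTask;
    }
}
```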

Azure Functions on Consumption plan have execution timeouts – 10 minutes maximum, insufficient for 2-hour tasks. While Durable Functions can orchestrate longer workflows, direct 2-hour processing isn’t supported. Logic Apps support long-running actions but are more expensive for compute-intensive processing. Container Instances work but require more infrastructure management than WebJobs which inherit App Service’s deployment, scaling, and monitoring capabilities.

Q109 

You are implementing Azure Service Bus topics with multiple subscriptions. Each subscription should receive only messages matching specific criteria. What should you configure?

A) Message sessions with session IDs

B) Subscription filters with SQL filter expressions

C) Dead-letter queues per subscription

D) Duplicate detection on the topic

Answer: B

Explanation:

Subscription filters with SQL filter expressions enable message routing based on message properties and content, ensuring each subscription receives only relevant messages without requiring multiple topics or client-side filtering.

Service Bus supports three types of filters on subscriptions. SQL filters use SQL-92 syntax to evaluate message properties, correlation filters match specific property values efficiently, and the default filter (true filter) accepts all messages. SQL filters provide the most flexibility, supporting comparisons, logical operators, and functions to evaluate complex criteria based on custom properties, system properties, or message metadata.

The implementation involves setting filter rules when creating or updating subscriptions. For example, a subscription for high-priority orders might use the filter Priority = 'High', while a subscription for specific regions uses Region IN ('US', 'EU'). When you publish a message to the topic, Service Bus evaluates all subscription filters and delivers copies to subscriptions whose filters match. Publishers send messages once to the topic, and Service Bus handles routing to appropriate subscriptions.
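
As a small sketch using the Azure.Messaging.ServiceBus.Administration client (topic, subscription, rule, and property names are illustrative assumptions):

```csharp
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient("<service-bus-connection-string>");

// Create a subscription that receives only high-priority orders.
await adminClient.CreateSubscriptionAsync(
    new CreateSubscriptionOptions("orders", "high-priority"),
    new CreateRuleOptions("HighPriorityOnly", new SqlRuleFilter("Priority = 'High'")));

// Publishers simply set the property on each message they send:
// message.ApplicationProperties["Priority"] = "High";
```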

This architecture provides powerful messaging patterns. You can implement content-based routing where different consumers process different message types, priority-based processing where high-priority messages go to dedicated subscriptions with more consumers, and regional processing where messages route to subscriptions serving specific geographic areas. Filters execute server-side, eliminating unnecessary message transmission to consumers who would discard them client-side.

Filters support complex expressions including EXISTS for checking property presence, NOT for negation, and arithmetic operators. You can create multiple filter rules per subscription with different actions, enabling sophisticated routing logic. Changes to filters don’t require publisher modifications.

Message sessions provide ordered processing and stateful workflows but don’t filter messages. Dead-letter queues handle processing failures, not routing. Duplicate detection prevents reprocessing identical messages but doesn’t route based on content.

Q110 

You need to implement distributed transactions across multiple Azure SQL databases and Azure Cosmos DB. What should you use?

A) Two-phase commit protocol

B) Azure Cosmos DB stored procedures

C) Saga pattern with compensating transactions

D) Azure SQL Database elastic transactions

Answer: C

Explanation:

Saga pattern with compensating transactions is the appropriate approach for distributed transactions across heterogeneous data stores like Azure SQL Database and Cosmos DB, which don’t support traditional distributed transactions.

The saga pattern breaks a distributed transaction into a series of local transactions, where each service performs its local transaction and publishes an event or message to trigger the next step. If any step fails, the saga executes compensating transactions that undo the changes made by previous steps, maintaining consistency without requiring locks across services.

There are two saga coordination approaches. Choreography uses events where each service listens for events and performs its action, then publishes its own event. Orchestration uses a central coordinator that directs participants when to execute local transactions. Orchestration provides better control and monitoring, making it easier to track saga progress and handle failures.

Implementation involves designing compensating transactions for each operation. For example, if creating an order involves reserving inventory in SQL Database and recording the order in Cosmos DB, the compensating transactions would release the inventory reservation and delete the order record. Each step must be idempotent since retries may occur during failure recovery.

Azure Durable Functions provides excellent support for saga orchestration with built-in reliability, state management, and error handling. The orchestrator function coordinates the saga steps, handling failures and triggering compensations as needed. The framework ensures orchestration continues even during infrastructure failures.
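
A minimal orchestrator sketch for this saga, assuming in-process Durable Functions and hypothetical ReserveInventory, RecordOrder, and ReleaseInventory activities, might look like this:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class OrderSaga
{
    [FunctionName("OrderSaga")]
    public static async Task Run([OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var order = context.GetInput<Order>();

        await context.CallActivityAsync("ReserveInventory", order);   // local transaction in Azure SQL
        try
        {
            await context.CallActivityAsync("RecordOrder", order);    // local transaction in Cosmos DB
        }
        catch
        {
            // Compensate the earlier step if a later step fails.
            await context.CallActivityAsync("ReleaseInventory", order);
            throw;
        }
    }
}

public record Order(string Id, string Sku, int Quantity);
```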

Two-phase commit requires a distributed transaction coordinator, which Azure SQL Database and Cosmos DB don’t provide across service boundaries. Cosmos DB stored procedures are transactional only for documents within a single logical partition. SQL Database elastic transactions span only Azure SQL databases, not Cosmos DB or other services.

Q111

You are implementing Azure API Management. The backend API returns detailed error messages that should not be exposed to external clients. What should you configure?

A) Inbound policy with message transformation

B) Outbound policy with error response transformation

C) Backend policy with error handling

D) On-error policy with custom error response

Answer: D

Explanation:

On-error policy with custom error response enables you to intercept errors from backend services and transform them into standardized, security-conscious responses that hide internal implementation details while providing useful information to clients.

The on-error policy section executes when errors occur during request processing, whether from backend failures, policy errors, or other issues. Within this section, you can inspect the error context, log details for internal monitoring, and return custom error responses with appropriate status codes and sanitized messages. This prevents leaking sensitive information like database connection strings, internal server names, or stack traces that might appear in backend error messages.

Implementation involves adding an on-error section to your policy definition with return-response elements that specify status codes and response bodies. You can categorize errors by status code ranges or specific conditions, providing different responses for different error types. For example, 500-level errors might return a generic “Internal Server Error” message while 400-level errors return more specific validation messages.
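
A hedged example of such an on-error section is shown below; the status code, payload shape, and use of context.RequestId as a correlation ID are illustrative choices, not a prescribed format.

```xml
<on-error>
    <!-- Hide backend details; return a sanitized, consistent error payload. -->
    <return-response>
        <set-status code="500" reason="Internal Server Error" />
        <set-header name="Content-Type" exists-action="override">
            <value>application/json</value>
        </set-header>
        <set-body>@{
            return new JObject(
                new JProperty("error", "An unexpected error occurred."),
                new JProperty("correlationId", context.RequestId)
            ).ToString();
        }</set-body>
    </return-response>
</on-error>
```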

The policy has access to context variables including the original error message, status code, and headers, enabling logging to Application Insights or other monitoring systems while returning sanitized responses to clients. You can construct JSON or XML error responses following consistent schemas, improving client error handling predictability.

Error handling can include retry logic, fallback responses, or circuit breaker patterns. You can also add correlation IDs to error responses that match internal logs, helping support teams troubleshoot issues without exposing details to users.

Inbound policies execute before backend calls, so they can’t transform backend error responses. Outbound policies execute on successful responses, not errors. Backend policies can catch errors from backend calls but the on-error section provides more comprehensive error handling including policy execution errors and system failures.

Q112

You need to implement authentication for a mobile application that calls Azure Functions. Users should authenticate with social identity providers. What should you implement?

A) Azure AD B2C with social identity providers

B) Azure AD with guest user invitations

C) Custom authentication with database-stored credentials

D) API key authentication with Azure API Management

Answer: A

Explanation:

Azure AD B2C with social identity providers offers comprehensive consumer identity management, enabling users to sign in with existing social accounts while providing secure token-based authentication for your Azure Functions backend.

Azure AD B2C is specifically designed for customer-facing applications, supporting sign-up and sign-in with social identity providers including Facebook, Google, Microsoft Account, Twitter, and others. It also supports local accounts with email or username, phone sign-in, and custom identity providers through OpenID Connect or SAML. The service handles the complexity of OAuth flows, token issuance, and user profile management.

User flows and custom policies define the sign-up and sign-in experience. User flows provide pre-built experiences for common scenarios, while custom policies enable complex scenarios with conditional logic and API integrations. B2C issues JWT tokens after authentication containing user claims like email, display name, and social provider identifiers. Your mobile app includes these tokens in requests to Azure Functions.

Azure Functions validates B2C tokens using JWT validation middleware. You configure the function to accept tokens from your B2C tenant, validating the signature, issuer, and audience. The token claims provide user identity, enabling personalized functionality and authorization decisions. B2C tokens are short-lived and automatically refreshed by the mobile SDK, maintaining security without frequent user re-authentication.

B2C provides additional features including multi-factor authentication, self-service password reset, profile editing, and customizable UI matching your brand. The service scales automatically to handle millions of users and integrates with Azure Monitor for analytics and auditing.

Azure AD supports social providers through guest users but is designed for organizational scenarios, not consumer apps. Custom authentication requires implementing OAuth flows, token management, and password security yourself. API keys don’t provide user identity or modern authentication features.

Q113

You are implementing Azure Cosmos DB with the Core (SQL) API. You need to ensure all reads return the most recent committed write from any region. What consistency level should you choose?

A) Strong consistency

B) Bounded staleness consistency

C) Session consistency

D) Consistent prefix consistency

Answer: A

Explanation:

Strong consistency ensures all reads return the most recent committed write globally across all regions, providing linearizability where operations appear to execute in a single, global order that respects real-time ordering.

Strong consistency offers the highest consistency guarantee available in Cosmos DB. When a write is committed, all subsequent reads from any region immediately see that write. There is no possibility of reading stale data, and all clients see a consistent view of the data at any point in time. This matches the behavior of traditional single-region databases with serializable isolation.

The mechanism works through quorum-based replication. Writes must be acknowledged by a quorum of replicas before returning success, ensuring the write is durable and visible. Reads also require quorum acknowledgment, guaranteeing they see all committed writes. This protocol ensures linearizability but introduces latency equal to the round-trip time to the farthest replica in the replication set.

Strong consistency is appropriate for scenarios requiring absolute data consistency such as financial transactions, inventory systems where overselling must be prevented, and applications where reading stale data could cause incorrect business decisions. The trade-off is higher latency for operations since they require cross-region coordination.

The latency impact is significant for globally distributed applications. Read and write operations may take hundreds of milliseconds when regions are geographically distant. Strong consistency also limits availability during network partitions since quorum requirements prevent operations when regions are unreachable.

Bounded staleness guarantees staleness within defined bounds but allows reading slightly stale data. Session consistency only guarantees consistency within a single session, not across clients. Consistent prefix prevents out-of-order reads but allows arbitrary staleness. Only strong consistency guarantees immediate global visibility of all writes.

Q114

You need to implement a solution that archives Azure App Service logs to Azure Storage automatically with retention management. What should you configure?

A) App Service diagnostic settings with Log Analytics workspace

B) App Service diagnostic settings with Storage Account

C) Azure Monitor Logs with export to storage

D) Azure Logic App to periodically copy logs

Answer: B

Explanation:

App Service diagnostic settings with Storage Account provides native log archival capabilities with automatic retention management, enabling long-term storage of application and platform logs without custom code or orchestration.

Diagnostic settings in App Service enable you to route logs to multiple destinations including Storage Accounts, Log Analytics workspaces, and Event Hubs. For archival purposes, Storage Account is optimal due to low cost and built-in retention policies. You configure which log categories to capture such as HTTP logs, application logs, deployment logs, and platform logs.

Once configured, App Service automatically streams logs to the specified storage account, organizing them by log type and date in blob containers. The logs are written in JSON format, making them easily queryable and analyzable. Storage accounts provide unlimited retention at minimal cost, ideal for compliance requirements mandating long-term log retention.

Storage lifecycle management policies enable automatic retention management without custom code. You can configure policies to move logs from hot to cool or archive tier after specified periods, then delete after retention periods expire. For example, logs might transition to cool tier after 30 days, archive tier after 90 days, and delete after 7 years, all managed automatically.
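
A sketch of such a lifecycle policy follows; the insights-logs- prefix and the retention periods are assumptions you would adjust to match your container names and compliance requirements.

```json
{
  "rules": [
    {
      "name": "archive-app-service-logs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "insights-logs-" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 2555 }
          }
        }
      }
    }
  ]
}
```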

The integration requires minimal configuration – select the storage account and log categories in diagnostic settings. App Service handles authentication using managed identity, eliminating credential management. Logs are immediately available in storage, and you can query them using tools like Azure Storage Explorer or programmatically using Azure SDKs.

Log Analytics workspace is designed for querying and analysis, not long-term archival. While logs can be exported to storage from Log Analytics, this adds complexity. Logic Apps require custom implementation, ongoing maintenance, and compute costs for execution, unnecessary when native capabilities exist.

Q115

You are implementing Azure Event Grid for event-driven architecture. Events must be delivered reliably with retry and dead-lettering. What should you configure?

A) Event Grid system topics with default retry policy

B) Event Grid custom topics with retry policy and dead-letter destination

C) Event Hub with capture to storage

D) Service Bus topics with duplicate detection

Answer: B

Explanation:

Event Grid custom topics with retry policy and dead-letter destination provide comprehensive reliable messaging with configurable retry behavior and automatic handling of events that fail delivery after all retries.

Event Grid retry policies control how aggressively Event Grid attempts delivery when subscribers are unavailable or return errors. You configure maximum retry attempts and event time-to-live (TTL). Event Grid uses exponential backoff between retries, waiting progressively longer between attempts to avoid overwhelming recovering subscribers. The default policy retries for 24 hours, but you can customize this based on your requirements.

Dead-letter destinations are storage accounts where Event Grid sends events that fail all delivery attempts. This prevents event loss while enabling offline analysis of delivery failures. Each dead-lettered event includes metadata about why delivery failed, such as HTTP status codes and error details. You can process dead-lettered events manually, automatically through monitoring functions, or replay them after fixing subscriber issues.

The configuration involves specifying maximum delivery attempts, event TTL, and a dead-letter blob container when creating the event subscription. Event Grid automatically manages the retry schedule and dead-lettering. You can configure different retry policies for different event subscriptions based on subscriber requirements and SLAs.
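
As an illustrative Azure CLI sketch (resource IDs, names, and limits are placeholders, and flag availability should be confirmed for your CLI version):

```bash
az eventgrid event-subscription create \
  --name orders-sub \
  --source-resource-id /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.EventGrid/topics/orders-topic \
  --endpoint https://myapp.azurewebsites.net/api/handler \
  --max-delivery-attempts 10 \
  --event-ttl 1440 \
  --deadletter-endpoint /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/deadletter
```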

Event Grid provides delivery guarantees through at-least-once semantics. Events may be delivered multiple times during retries, so subscribers should implement idempotent processing. The retry mechanism ensures temporary failures like network issues or brief subscriber downtime don’t result in event loss.

System topics have retry policies but limited configurability compared to custom topics. Event Hubs focus on high-throughput streaming rather than reliable delivery to specific endpoints. Service Bus provides reliable messaging but uses a different model with queues and subscriptions rather than event-driven patterns.

Q116

You need to implement real-time communication between a server and web clients. Clients should receive updates pushed from the server. What should you use?

A) Azure SignalR Service

B) Azure Event Grid with webhook subscriptions

C) Azure Service Bus topics

D) HTTP polling with Azure API Management

Answer: A

Explanation:

Azure SignalR Service provides managed infrastructure for real-time, bidirectional communication between servers and web clients, supporting WebSocket-based push notifications with automatic connection management and scaling.

SignalR abstracts the complexity of real-time communication by automatically selecting the best transport protocol based on client capabilities – WebSocket, Server-Sent Events, or long polling. WebSocket provides the most efficient bidirectional communication with minimal latency, while fallback options ensure compatibility with restrictive network environments and older browsers.

The service handles connection management, scaling, and reliability. When clients connect, SignalR Service maintains persistent connections even as your backend services scale or restart. The service can handle hundreds of thousands of concurrent connections, automatically scaling to meet demand. Connection state is managed by the service, eliminating the need for sticky sessions or connection tracking in your application.

Implementation involves integrating the SignalR SDK in your backend (ASP.NET Core, Azure Functions, or other platforms) and client applications (JavaScript, .NET, Java, etc.). Your server sends messages to SignalR Service specifying target clients by user ID, group, or connection ID. SignalR Service delivers messages to connected clients in real-time. The service supports broadcasting to all clients, multicast to groups, and unicast to specific users.
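
A minimal sketch of the server side in ASP.NET Core with the Azure SignalR SDK is shown below; the hub name, method name, and OrderNotifier class are illustrative assumptions.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class NotificationsHub : Hub { }

public class OrderNotifier
{
    private readonly IHubContext<NotificationsHub> _hub;
    public OrderNotifier(IHubContext<NotificationsHub> hub) => _hub = hub;

    // Push an update to every connected web client in real time.
    public Task BroadcastAsync(string orderId, string status) =>
        _hub.Clients.All.SendAsync("orderStatusChanged", orderId, status);
}

// In Program.cs: builder.Services.AddSignalR().AddAzureSignalR("<connection-string>");
```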

SignalR enables scenarios like live dashboards, collaborative applications, real-time notifications, and chat applications. The managed service eliminates infrastructure concerns like connection scaling, protocol negotiation, and failover handling that plague self-hosted SignalR implementations.

Event Grid with webhooks requires clients to expose public endpoints, unsuitable for web browsers. Service Bus is message queuing, not real-time push. HTTP polling creates excessive overhead, wastes resources, and introduces latency between updates and client awareness.

Q117

You are implementing Azure Functions with dependency injection. You need to register a service with a scoped lifetime. What should you do?

A) Use services.AddSingleton to register the service

B) Use services.AddScoped to register the service

C) Use services.AddTransient to register the service

D) Manually create instances in each function

Answer: B

Explanation:

Using services.AddScoped registers the service with scoped lifetime, creating one instance per function execution, which is the appropriate pattern for services that should be isolated per request while being reused across the execution scope.

Azure Functions supports dependency injection through the IServiceCollection configuration in the Startup class (in-process model; the isolated worker model registers services on the HostBuilder in Program.cs). Scoped lifetime means the service container creates one instance when first requested during a function execution and reuses that instance throughout the execution. When the function completes, the instance is disposed. For the next function invocation, a new instance is created.

Scoped services are ideal for services that maintain state during request processing but should be isolated between requests. Examples include Entity Framework DbContext, HTTP clients with request-specific configuration, and services that aggregate data during processing. The scoped lifetime ensures each function execution gets a clean instance without cross-contamination from previous executions.

The implementation involves creating a Startup class decorated with FunctionsStartup attribute, overriding Configure method, and registering services with appropriate lifetimes. In your function constructor, you declare dependencies, and the runtime injects instances automatically. The dependency injection container handles instance creation, disposal, and lifetime management.
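
For the in-process model, a minimal Startup sketch looks like the following; IOrderService and OrderService are hypothetical application types.

```csharp
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // One instance per function invocation, disposed when the invocation completes.
            builder.Services.AddScoped<IOrderService, OrderService>();
        }
    }
}
```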

Scoped lifetime in Azure Functions maps to function execution rather than HTTP request scope since Functions can be triggered by various sources including timers, queues, and events. This ensures consistent behavior across trigger types while maintaining isolation guarantees.

Singleton services create one instance shared across all function executions, causing potential state corruption and threading issues if not thread-safe. Transient services create new instances every time requested, even within a single execution, potentially causing issues with services that should maintain state during processing. Manual instance creation bypasses dependency injection benefits like testability and automatic disposal.

Q118

You need to implement Azure Cosmos DB change feed processing to react to document changes in real-time. What should you use?

A) Azure Functions with Cosmos DB trigger

B) Polling with continuation tokens

C) Cosmos DB stored procedures

D) Azure Logic App with recurrence trigger

Answer: A

Explanation:

Azure Functions with Cosmos DB trigger provides the most efficient and scalable solution for processing change feed events in real-time, with automatic checkpoint management, parallel processing, and seamless scaling.

The Cosmos DB trigger for Azure Functions uses the change feed processor library internally, monitoring the container for changes and invoking your function whenever documents are inserted or updated. The trigger automatically handles lease management, checkpointing, and parallel processing across multiple function instances without requiring manual coordination.

The trigger maintains leases in a separate lease container, tracking which portions of the change feed have been processed. When changes occur, the trigger distributes processing across available function instances, enabling horizontal scaling. If processing fails, changes are retried automatically. The trigger supports ordered processing within each partition key while processing different partitions in parallel for optimal throughput.

Configuration is straightforward – specify the monitored container, lease container, and optional settings like batch size and feed poll delay. The function receives batches of changed documents as input. You can access document metadata including the timestamp and operation type. The trigger handles all infrastructure concerns including connection management, retry logic, and state persistence.
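
A hedged sketch using the in-process model and the Cosmos DB extension v4 attribute names follows; the database, container, connection setting, and OrderDocument type are placeholders.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderChangeFeed
{
    [FunctionName("OrderChangeFeed")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "store",
            containerName: "orders",
            Connection = "CosmosDbConnection",
            LeaseContainerName = "leases",
            CreateLeaseContainerIfNotExists = true)] IReadOnlyList<OrderDocument> changes,
        ILogger log)
    {
        foreach (var doc in changes)
        {
            log.LogInformation("Changed document {Id}", doc.Id);
            // ...update materialized views, notify downstream systems, etc...
        }
    }
}

public class OrderDocument { public string Id { get; set; } }
```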

Use cases include maintaining materialized views, synchronizing data to other data stores, sending notifications when documents change, aggregating data for analytics, and implementing event-sourcing patterns. The change feed captures all inserts and updates but not deletes, so deletion handling requires soft-delete patterns with update operations.

Manual polling with continuation tokens requires implementing lease management, checkpointing, and distributed coordination yourself, creating significant complexity. Stored procedures execute within Cosmos DB and can’t trigger external actions. Logic Apps with recurrence introduce polling delays and don’t provide the real-time responsiveness or scalability of the change feed trigger.

Q119

You are implementing Azure API Management. Backend APIs are deployed in multiple regions. API Management should route requests to the nearest healthy backend. What should you configure?

A) Backend pool with load balancing

B) Multiple backends with set-backend-service policy and priority

C) Azure Traffic Manager in front of API Management

D) Azure Front Door with API Management backend

Answer: A

Explanation:

Backend pool with load balancing provides native multi-region backend support in API Management, enabling geographic routing, health probing, and automatic failover without requiring external load balancers or traffic managers.

Backend pools allow you to define multiple backend endpoints representing the same logical API deployed in different regions. API Management can load balance requests across these backends using various algorithms including round-robin, weighted, and priority-based routing. The service performs health checks on backend endpoints, automatically removing unhealthy backends from rotation and restoring them when health recovers.

For geographic routing, you can configure backend pools with priority settings where backends in the same region as the API Management gateway receive higher priority, ensuring local backends are preferred when healthy. This minimizes latency by keeping traffic within the same region. When local backends fail, traffic automatically fails over to backends in other regions.

Configuration involves creating a backend pool, adding backend URLs for each regional deployment, and configuring health probe settings including probe interval, timeout, and healthy/unhealthy thresholds. In your API policies, use the set-backend-service policy specifying the backend pool instead of a single backend URL. API Management handles the routing logic based on backend health and load balancing configuration.
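
Assuming a load-balanced backend pool named orders-backend-pool has already been defined under the API Management backends resource, the routing policy itself is a one-liner (a sketch, not a full policy file):

```xml
<policies>
    <inbound>
        <base />
        <!-- Route to the pool; API Management picks a healthy backend
             according to the pool's priority/weight configuration. -->
        <set-backend-service backend-id="orders-backend-pool" />
    </inbound>
</policies>
```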

Backend pools integrate with circuit breaker patterns. You can configure failure thresholds that temporarily remove problematic backends from rotation, preventing cascading failures. Metrics and diagnostics provide visibility into backend health, response times, and routing decisions, enabling operational monitoring and troubleshooting.

Multiple backends with set-backend-service can achieve some routing but requires complex policy logic to implement health checking and failover. Traffic Manager and Front Door add external dependencies and complexity when API Management provides native capabilities. These external services also introduce additional latency and cost.

Q120

You need to implement Azure Blob Storage versioning to maintain history of blob modifications with automatic cleanup of old versions. What should you configure?

A) Blob snapshots with manual management

B) Blob versioning with lifecycle management policies

C) Soft delete for blobs

D) Point-in-time restore for containers

Answer: B

Explanation:

Blob versioning with lifecycle management policies provides automatic version history tracking with configurable retention, enabling recovery from accidental modifications or deletions while preventing indefinite version accumulation.

Blob versioning automatically creates a new version whenever a blob is modified or deleted. Each version is immutable and assigned a unique version ID based on the modification timestamp. The latest version remains accessible through the standard blob URL, while previous versions are accessible via version-specific URLs. This enables recovery from accidental overwrites or unwanted modifications without impacting application access patterns.

Lifecycle management policies integrate with versioning to automate version cleanup. You can define rules that delete versions older than a specified age, transition old versions to cooler storage tiers, or retain only a certain number of versions. For example, a policy might keep all versions for 30 days, then delete versions older than 90 days, automatically managing storage costs while maintaining useful history.
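
A sketch of a version-trimming rule follows; the tiering and deletion thresholds are assumptions you would tune to your retention requirements.

```json
{
  "rules": [
    {
      "name": "trim-old-versions",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "version": {
            "tierToCool": { "daysAfterCreationGreaterThan": 30 },
            "delete": { "daysAfterCreationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
```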

The combination provides comprehensive data protection without manual intervention. Versioning ensures no data loss from modifications while lifecycle policies prevent unlimited version accumulation. You can customize retention based on compliance requirements and cost constraints. Policies can also handle deletion markers (created when blobs are deleted with versioning enabled) separately from regular versions.

Versioning differs from snapshots in important ways. Versions are created automatically on every modification without explicit snapshot commands. Versions survive blob deletion while snapshots are deleted with the base blob. Versioning provides linear history of all changes while snapshots represent point-in-time copies requiring manual creation.

Soft delete enables recovery of deleted blobs and versions for a retention period but doesn’t track modification history. Point-in-time restore recovers containers to a previous state but doesn’t provide granular version access for individual blobs. Manual snapshot management requires application changes and lacks automatic cleanup capabilities.
