Microsoft AZ-204 Developing Solutions for Azure Exam Dumps and Practice Test Questions, Set 8: Q141–160


Q141

You are implementing Azure Functions with Durable Functions for workflow orchestration. You need to wait for external events before continuing. What should you use?

A) WaitForExternalEvent in orchestrator function

B) Timer trigger with polling

C) Queue input binding

D) HTTP trigger with manual state management

Answer: A

Explanation:

Calling WaitForExternalEvent in an orchestrator function provides native support for asynchronous external interactions in durable workflows, enabling orchestrations to pause efficiently until external processes complete and send events.

Durable Functions orchestrations can wait for external events using the WaitForExternalEvent method, which suspends the orchestration without consuming resources until an event with the specified name is raised. External systems or processes can raise events using the DurableClient.RaiseEventAsync method, providing event name and payload. When the event is raised, the orchestration resumes from where it paused.

This pattern enables human-in-the-loop workflows where orchestrations wait for approvals, external service integration where orchestrations pause until webhooks arrive, or any scenario requiring asynchronous coordination. The orchestration maintains all state automatically, resuming with full context when the event arrives.

The implementation involves calling context.WaitForExternalEvent<T> with an event name and optionally a timeout. The orchestration pauses at this point until an event is raised or timeout expires. Multiple orchestrations can wait for different events simultaneously. Events include typed payloads providing data from external systems to the resumed orchestration.
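As an illustration, a minimal orchestrator sketch in C# (in-process Durable Functions model; the activity names and the 24-hour deadline are assumptions) pauses for an approval event and races it against a durable timer:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class ApprovalOrchestration
{
    [FunctionName("ApprovalOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        using var cts = new CancellationTokenSource();

        // Pause here, consuming no compute, until "ApprovalEvent" is raised or 24 hours pass.
        Task<bool> approval = context.WaitForExternalEvent<bool>("ApprovalEvent");
        Task timeout = context.CreateTimer(context.CurrentUtcDateTime.AddHours(24), cts.Token);

        Task winner = await Task.WhenAny(approval, timeout);
        if (winner == approval && approval.Result)
        {
            cts.Cancel(); // clean up the durable timer
            await context.CallActivityAsync("ProcessApproval", null);   // assumed activity
        }
        else
        {
            await context.CallActivityAsync("EscalateRequest", null);   // assumed activity
        }
    }
}
```

An external caller would then raise the event through the orchestration client, for example await client.RaiseEventAsync(instanceId, "ApprovalEvent", true);.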

Resource efficiency is a key benefit. While waiting, the orchestration consumes no compute resources, only minimal storage for persisting state. This enables long-running workflows waiting hours or days for external events without continuous polling or resource consumption. When events arrive, orchestrations resume within seconds.

The pattern supports complex scenarios like waiting for multiple events by combining several WaitForExternalEvent tasks with Task.WhenAny or Task.WhenAll, handling timeouts gracefully, and correlating events to orchestration instances using instance IDs.

Timer triggers with polling waste resources continuously checking for events. Queue bindings require external systems to use Azure queues rather than their natural notification mechanisms. HTTP triggers with manual state management require implementing the entire state machine persistence and resumption logic that Durable Functions provides automatically.

Q142

You need to implement Azure Blob Storage lifecycle policies that apply different rules to blobs based on their prefix. What should you configure?

A) Single lifecycle policy with prefix filters

B) Separate storage accounts for each prefix

C) Manual scripts to move blobs between tiers

D) Azure Automation runbooks with scheduled execution

Answer: A

Explanation:

A single lifecycle policy with prefix filters enables efficient rule-based data management within one storage account, applying different retention and tiering rules to blobs based on their container and prefix paths.

Lifecycle management policies support filters based on blob name prefixes, enabling different rules for different blob paths within the same storage account. For example, you might configure blobs under the logs/ prefix to move to the cool tier after 30 days and be deleted after 90 days, while blobs under the backups/ prefix transition to archive after 60 days and are retained for 7 years.

The policy is defined in JSON with rules containing filters and actions. Filters specify blob types (blockBlob, appendBlob), blob index tags, and prefix matches like container/path/. Actions define transitions (tierToCool, tierToArchive) or deletion with conditions based on days since creation, modification, or last access. Multiple rules can coexist in one policy, each applying to matching blobs.
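A minimal policy sketch along these lines (the container name app-data, rule names, and day counts are assumptions; note that prefixMatch values begin with the container name) could look like:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "logs-rule",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "app-data/logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "delete": { "daysAfterModificationGreaterThan": 90 }
          }
        }
      }
    },
    {
      "enabled": true,
      "name": "backups-rule",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "app-data/backups/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 60 },
            "delete": { "daysAfterModificationGreaterThan": 2555 }
          }
        }
      }
    }
  ]
}
```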

This approach provides centralized management where all lifecycle rules are defined in one place, making governance and compliance easier. Policy execution is automatic – the platform evaluates rules daily and performs actions without custom code. Changes to policies take effect on the next evaluation cycle. Policies are version controlled and can be deployed through ARM templates or Terraform.

The declarative nature ensures consistency and reduces operational overhead compared to imperative scripts. Actions are atomic and idempotent, safely handling partial failures. The system tracks which blobs have been processed, avoiding redundant operations.

Separate storage accounts per prefix create management overhead and complicate access control, networking, and cost tracking. Manual scripts and Automation runbooks require ongoing maintenance, error handling, and monitoring, all provided automatically by lifecycle policies. They also consume compute resources while lifecycle policy execution is free beyond minimal storage transaction costs.

Q143

You are implementing Azure API Management with backend services across multiple Azure regions. How should you ensure lowest latency for global users?

A) Single API Management instance with Azure Traffic Manager

B) Multi-region API Management deployment with backend pools

C) Azure Front Door with API Management backend

D) One API Management instance per region with separate APIs

Answer: B

Explanation:

Multi-region API Management deployment with backend pools provides optimal latency by placing API Management gateways near users while intelligently routing to the nearest healthy backend, all within API Management’s native capabilities.

API Management supports multi-region deployment where you add gateway regions to a single API Management instance. Each region deploys a gateway handling API requests locally, reducing latency for users in that region. Backend pools enable configuring multiple backend endpoints representing regional deployments of your services, with geographic routing preferring backends in the same region as the gateway.

The architecture places API Management gateways globally while maintaining centralized configuration and monitoring. Users connect to the nearest gateway via Azure Traffic Manager or DNS-based routing. The gateway applies policies, authentication, and rate limiting locally, then forwards to backends. Backend pools route requests to backends in the same region when available, falling back to other regions during failures or maintenance.

This configuration provides global low latency for both API requests and backend communication. Users experience fast API gateway connections, and backends receive requests from nearby gateways. Policy execution occurs at the edge, reducing round trips. Backend failover is automatic based on health probes.

Deployment and management remain centralized. Configuration changes propagate to all regions automatically. Monitoring and analytics aggregate globally. You pay only for the API Management instance and additional regional gateway capacity, without managing separate instances per region.

Single instance with Traffic Manager doesn’t address backend latency and requires API Management to route requests across regions. Front Door adds another service when API Management provides native multi-region support. Separate instances per region create management complexity with separate configurations, monitoring, and lack of unified analytics.

Multi-region API Management combined with backend pools provides the complete global distribution solution for APIs within a single managed service.

Q144

You need to implement Azure Key Vault access for Azure Functions using managed identity. The solution must work in both Azure and local development. What should you use?

A) ManagedIdentityCredential only

B) DefaultAzureCredential

C) ClientSecretCredential with app settings

D) Interactive browser credential

Answer: B

Explanation:

DefaultAzureCredential provides automatic credential discovery that works seamlessly across Azure and local development environments without code changes, using managed identity in Azure and developer credentials locally.

DefaultAzureCredential attempts multiple authentication methods in sequence: environment variables, managed identity, Visual Studio, Azure CLI, Azure PowerShell, and interactive browser. In Azure, it discovers and uses the Function App’s managed identity automatically. In local development, it uses credentials from tools like Azure CLI or Visual Studio that developers already use for Azure access.

This approach enables the same code to work in all environments without configuration changes or environment-specific branches. Developers don’t need to store credentials locally or configure separate authentication for development. The production function uses managed identity without any credentials in configuration. The pattern follows security best practices by eliminating credential management.

Implementation is simple – create SecretClient or other Key Vault clients using new DefaultAzureCredential(). The SDK handles authentication automatically. In Azure, ensure managed identity is enabled and granted Key Vault permissions. Locally, ensure developers are signed in to Azure CLI or Visual Studio with accounts that have Key Vault access.
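A minimal C# sketch (the vault URL and secret name are assumptions) shows how little code this requires:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class KeyVaultAccess
{
    // DefaultAzureCredential resolves to the Function App's managed identity in Azure
    // and to Azure CLI / Visual Studio credentials on a developer workstation.
    private static readonly SecretClient secretClient = new SecretClient(
        new Uri("https://my-vault.vault.azure.net/"),   // assumed vault name
        new DefaultAzureCredential());

    public static async Task<string> GetConnectionStringAsync()
    {
        KeyVaultSecret secret = await secretClient.GetSecretAsync("SqlConnectionString"); // assumed secret name
        return secret.Value;
    }
}
```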

The credential chain provides fallback mechanisms. If one authentication method fails, the next is attempted. This resilience ensures authentication succeeds in various environments including continuous integration systems (using environment variables), Azure services (using managed identity), and developer workstations (using local credentials).

ManagedIdentityCredential only works in Azure, failing in local development. ClientSecretCredential requires storing credentials somewhere, just moving the problem. Interactive browser credential requires user interaction, unsuitable for automated functions. Only DefaultAzureCredential provides the versatility needed for all environments.

Q145

You are implementing Azure Cosmos DB with multiple consistency levels for different operations. Can you use different consistency levels in the same application?

A) No, consistency level is set at account level only

B) Yes, by configuring per-request consistency level weaker than account level

C) Yes, by using multiple Cosmos DB accounts

D) No, all requests must use account consistency level

Answer: B

Explanation:

Yes — you can configure a per-request consistency level that is weaker than the account default, giving you the flexibility to keep stronger consistency for critical operations while relaxing it on less critical reads to improve performance and reduce latency.

A Cosmos DB account is configured with a default consistency level, which can be Strong, Bounded Staleness, Session, Consistent Prefix, or Eventual. This default applies to all requests unless overridden per request. The SDK allows specifying a consistency level for individual read operations, but only at levels weaker than the account default – you cannot request stronger consistency than what is configured at the account level.

This design provides flexibility while maintaining guarantees. If your account uses Session consistency (appropriate for most scenarios), individual reads can opt down to Consistent Prefix or Eventual for better performance on non-critical reads. Critical reads can use the default Session consistency. You cannot request Strong consistency per-request if the account uses weaker consistency.

The pattern enables optimizing cost and performance for different access patterns. Background analytics queries might use Eventual consistency for best performance, while user-facing reads use Session consistency for read-your-own-writes guarantees. The application code makes these decisions based on operation criticality.

Implementation involves setting ConsistencyLevel property on ItemRequestOptions or FeedOptions when making read requests. Write operations always use the account consistency level since write durability is critical. The flexibility applies only to reads where staleness tolerance varies.
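A minimal C# sketch with the .NET SDK v3 (assuming a Session-level account default; the item ID, partition key, and document shape are hypothetical) contrasts the two read paths:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public record Order(string id, string customerId, decimal total);   // assumed document shape

public static class OrderReads
{
    public static async Task<(Order critical, Order relaxed)> ReadAsync(Container container)
    {
        // User-facing read: inherits the account-level Session consistency.
        ItemResponse<Order> critical = await container.ReadItemAsync<Order>(
            "order-123", new PartitionKey("customer-42"));

        // Background read: explicitly relaxed to Eventual (must be weaker than the account default).
        ItemResponse<Order> relaxed = await container.ReadItemAsync<Order>(
            "order-123", new PartitionKey("customer-42"),
            new ItemRequestOptions { ConsistencyLevel = ConsistencyLevel.Eventual });

        return (critical.Resource, relaxed.Resource);
    }
}
```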

Multiple Cosmos DB accounts with different consistency levels work but create management overhead, cost implications, and data synchronization challenges. It’s unnecessary when per-request consistency provides the needed flexibility. The account-level setting ensures minimum consistency guarantees while allowing relaxation per-operation for performance optimization.

Q146

You need to implement Azure Service Bus dead-letter queue monitoring with alerts when messages arrive. What should you configure?

A) Azure Monitor metric alert on dead-letter message count

B) Logic App with recurrence trigger polling the queue

C) Service Bus trigger function monitoring dead-letter queue

D) Event Grid subscription to Service Bus events

Answer: A

Explanation:

An Azure Monitor metric alert on dead-letter message count provides real-time alerting on dead-letter queue activity with minimal configuration, enabling proactive response to processing failures without polling or custom code.

Service Bus exposes metrics including active message count per queue and dead-letter message count. Azure Monitor metric alerts continuously evaluate these metrics and trigger when configured thresholds are exceeded. For dead-letter monitoring, create an alert that fires when dead-letter message count exceeds zero or a specific threshold, indicating processing failures requiring investigation.

The alert configuration specifies the Service Bus namespace and queue, the dead-letter message count metric, threshold value, evaluation frequency, and action groups for notifications. When the condition is met, Azure Monitor invokes action groups sending emails, SMS, push notifications, or triggering webhooks, Azure Functions, or ITSM integrations for automated response.

This approach provides immediate notification when issues occur without continuous polling. Alerts are cost-effective, only triggering actions when thresholds are exceeded. The evaluation runs automatically without maintaining infrastructure. Multiple alerts can monitor different queues or thresholds, creating tiered alerting for warning versus critical scenarios.

Advanced configurations include dynamic thresholds using machine learning to detect anomalous dead-letter rates rather than fixed thresholds, combining multiple conditions (e.g., high dead-letter count AND increasing trend), and correlation with other metrics like processing failures or CPU usage to identify root causes.

Logic Apps with recurrence polling waste resources continuously checking for messages. Service Bus trigger functions monitoring dead-letter queues work but process dead-lettered messages rather than alerting. Event Grid subscriptions to Service Bus are for message lifecycle events, not metric-based monitoring. Only Azure Monitor provides efficient metric-based alerting without polling or message processing.

Q147

You are implementing Azure Functions with high availability requirements. Functions must continue operating during region failures. What should you configure?

A) Single Function App with Basic tier App Service Plan

B) Function App with zone redundancy enabled

C) Multiple Function Apps deployed to different regions with Traffic Manager

D) Premium plan with VNet integration

Answer: C

Explanation:

Deploying multiple Function Apps to different regions with Traffic Manager provides true regional redundancy, ensuring functions remain available if an entire Azure region becomes unavailable and meeting high availability requirements for critical applications.

The architecture deploys identical Function Apps to multiple Azure regions (e.g., East US and West US). Each Function App is an independent deployment with its own compute, storage, and configuration. Azure Traffic Manager provides DNS-based routing to healthy endpoints, directing requests to available regions. When one region fails, Traffic Manager detects the failure through health probes and routes traffic to remaining healthy regions.

Implementation considerations include deploying function code to all regions using CI/CD pipelines, synchronizing configuration through Infrastructure as Code, ensuring backend dependencies (databases, storage) are also multi-region, and configuring Traffic Manager with appropriate routing method (priority for active-passive, performance for active-active based on latency).

The solution handles regional outages without manual intervention. Traffic Manager health probes continuously verify endpoint availability. When probes fail, Traffic Manager stops routing traffic to unhealthy regions within minutes. Applications using Traffic Manager DNS name automatically fail over without code changes or user intervention.

For stateful functions or those depending on storage, implement data replication between regions using Cosmos DB multi-region, SQL Database geo-replication, or Storage geo-redundant storage. Functions should be designed to handle eventual consistency and potential data conflicts during failover.

Single Function Apps lack regional redundancy. Zone redundancy protects against datacenter failures within a region but not regional outages. Premium plan with VNet doesn’t provide multi-region redundancy. Only deploying to multiple regions with intelligent routing provides the availability required to withstand complete regional failures.

Q148

You need to implement Azure Blob Storage encryption with customer-managed keys. What services should you use?

A) Azure Key Vault for key storage and Storage account encryption settings

B) Azure Information Protection

C) Application-level encryption before upload

D) Storage account keys for encryption

Answer: A

Explanation:

Using Azure Key Vault for key storage together with the storage account's encryption settings provides customer-managed key encryption, where you control the encryption keys used to protect data at rest, meeting compliance requirements for key management control.

Customer-managed keys (CMK) enable you to use your own encryption keys stored in Azure Key Vault instead of Microsoft-managed keys. Storage account data is encrypted at rest using your key from Key Vault. You maintain control over key lifecycle including rotation, access control, and auditing. Keys never leave Key Vault – Storage service authenticates to Key Vault and uses the key for encryption/decryption operations.

The configuration involves creating keys in Key Vault (or importing your own), enabling managed identity on the storage account, granting the identity cryptographic permissions on Key Vault, and configuring CMK encryption on the storage account specifying the Key Vault and key. Once configured, all new and existing data is encrypted with your key.

CMK provides several compliance benefits. You control key access through Key Vault access policies or RBAC, providing governance over who can decrypt data. Disabling or deleting keys renders data inaccessible, providing a crypto-shredding data deletion method. All key access is logged in Key Vault audit logs, providing proof of key usage for compliance audits.

Key rotation is managed through Key Vault key versioning. When you create a new key version, update storage account configuration to use the new version. Storage automatically re-encrypts data encryption keys with the new key. You can automate rotation using Key Vault’s automated rotation features for supported key types.

Azure Information Protection is for document classification and protection, not storage-level encryption. Application-level encryption provides additional security layer but doesn’t replace storage encryption. Storage account keys authenticate access but aren’t encryption keys. Only Key Vault-based CMK provides the customer-controlled encryption key management required for compliance scenarios.

Q149

You are implementing Azure API Management policies. You need to cache responses differently based on query string parameters. What policy should you use?

A) cache-lookup and cache-store with default settings

B) cache-lookup-value and cache-store-value

C) cache-lookup and cache-store with vary-by-query-parameter

D) response-cache with duration setting

Answer: C

Explanation:

The cache-lookup and cache-store policies with vary-by-query-parameter enable query-string-aware caching, where responses are cached separately for different query parameter values, ensuring users receive cached responses matching their specific query parameters.

API Management caching stores backend responses in a cache, serving subsequent identical requests from cache without calling the backend. The vary-by-query-parameter setting specifies which query parameters affect cache key generation. Requests with different values for these parameters get separate cache entries, while parameters not listed are ignored for caching purposes.

The configuration involves adding cache-lookup in the inbound policy section and cache-store in the outbound section. The vary-by-query-parameter element lists the query parameters to include in the cache key; for example, varying by category and page caches responses separately for different category and page values while ignoring other parameters. This optimizes cache effectiveness while preventing inappropriate cache sharing.

Use cases include product listings where category parameter should trigger separate cache entries but tracking parameters like utm_source shouldn’t, search APIs where query term must be cached separately but page size might not need to be, and filtering scenarios where filter parameters determine cached content.

Additional vary-by options include vary-by-header for caching based on HTTP headers (like Accept-Language for localized responses), vary-by-user for per-user caching, and vary-by-developer for developer-specific responses in developer portal scenarios. These can be combined for sophisticated caching strategies.

Cache duration is configured with the duration attribute in seconds. Considerations include balancing freshness with backend load reduction, implementing cache invalidation strategies for content updates, and monitoring cache hit rates to optimize parameters.
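Putting these pieces together, a policy sketch along these lines (the parameter names and the 300-second duration are assumptions) caches responses separately per category and page value:

```xml
<policies>
    <inbound>
        <base />
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false">
            <vary-by-query-parameter>category</vary-by-query-parameter>
            <vary-by-query-parameter>page</vary-by-query-parameter>
        </cache-lookup>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <cache-store duration="300" />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```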

cache-lookup-value and cache-store-value are for caching specific values in named caches, not HTTP responses. response-cache doesn’t exist as a standalone policy. Only cache-lookup/cache-store with vary-by options provide comprehensive HTTP response caching with query parameter awareness.

Q150

You need to implement Azure Container Instances with persistent storage across container restarts. What should you use?

A) Azure Files volume mount

B) emptyDir volume

C) Container filesystem

D) Azure Blob Storage NFS mount

Answer: A

Explanation:

An Azure Files volume mount provides persistent storage for Container Instances that survives container restarts and can be shared across multiple container instances, meeting the requirements of stateful containerized applications.

Azure Files integration with Container Instances enables mounting SMB file shares as volumes in containers. The volume mount appears as a directory in the container filesystem where the application can read and write files. Data persists in Azure Files even when containers stop, restart, or are redeployed, providing true persistent storage.

Configuration involves creating an Azure Files share, then specifying the volume mount in container group deployment including share name, storage account name, and storage account key. The container group authenticates to Azure Files automatically using these credentials. Multiple containers in the same container group can mount the same volume, enabling file sharing between containers.

Use cases include stateful applications requiring persistent configuration, applications generating logs or output files that must be preserved, shared storage for multi-container applications, and migrating traditional applications expecting filesystem persistence. Azure Files provides standard SMB semantics that most applications expect.

Performance considerations include Azure Files throughput limits based on storage account and share capacity, latency characteristics of SMB over network compared to local disk, and file locking behavior for concurrent access. Premium Azure Files offers higher performance when needed.

emptyDir volumes are temporary directories that share the container group's lifetime and are deleted when the group stops. The container filesystem is ephemeral, losing all data on container restart. Azure Blob Storage cannot be mounted natively in Container Instances; it requires specialized tooling such as blobfuse in certain scenarios. Only Azure Files provides the native, persistent, SMB-compatible storage Container Instances need.

Q151

You are implementing Azure Cosmos DB stored procedures. You need to ensure all operations in the stored procedure are atomic. What mechanism provides this?

A) Manual transaction management

B) Stored procedures execute in implicit transactions within a partition

C) Application-level retry logic

D) Optimistic concurrency control

Answer: B

Explanation:

Stored procedures execute in an implicit transaction scoped to a single partition, which provides automatic atomicity for all operations within the stored procedure: either all operations commit successfully or none do, without requiring explicit transaction management.

Cosmos DB stored procedures execute within the database engine on the server side, operating on documents within a single logical partition. All operations in the stored procedure execute as an implicit ACID transaction scoped to that partition. If any operation fails or the stored procedure throws an exception, the entire transaction rolls back automatically, maintaining data consistency.

This transactional guarantee enables implementing complex business logic involving multiple documents atomically. For example, a stored procedure can read account balances, validate business rules, and update multiple documents, knowing that if any step fails, the previous changes roll back. This prevents partial updates that could corrupt data integrity.
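Stored procedures are authored in JavaScript and run inside the database engine, so the sketch below is JavaScript rather than C#; the function name and document shapes are hypothetical, and both documents must carry the partition key value the procedure is executed against:

```javascript
// Atomic multi-document write: if the second insert fails, the first is rolled back too.
function createOrderWithAudit(order, audit) {
    var collection = getContext().getCollection();
    var response = getContext().getResponse();

    var accepted = collection.createDocument(collection.getSelfLink(), order,
        function (err, createdOrder) {
            if (err) throw err; // throwing aborts the implicit transaction
            collection.createDocument(collection.getSelfLink(), audit,
                function (err2, createdAudit) {
                    if (err2) throw err2; // rollback: the order insert above is undone as well
                    response.setBody({ order: createdOrder, audit: createdAudit });
                });
        });

    if (!accepted) throw new Error("Request not accepted; retry the operation.");
}
```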

The partition scope limitation is important – stored procedures cannot operate on documents across different partition keys in a single transaction. This aligns with Cosmos DB’s distributed architecture where partitions may be on different physical nodes. Transactions within a partition can be implemented efficiently without distributed coordination overhead.

Implementation best practices include designing partition keys to enable transactions on related documents, implementing idempotent stored procedures for safe retry, using continuation tokens for operations on large document sets, and returning meaningful results indicating success or specific failure reasons.

Performance considerations include execution timeout limits (default 5 seconds, configurable up to 60 seconds), request unit consumption for all operations in the transaction, and avoiding long-running transactions that could block partition resources.

Manual transaction management isn’t available in Cosmos DB’s programming model. Application-level retry handles transient failures but doesn’t provide atomicity. Optimistic concurrency with ETags prevents concurrent modification conflicts but doesn’t create transactions. Only implicit transaction scope in stored procedures guarantees atomic multi-document operations.

Q152

You need to implement Azure Functions with Circuit Breaker pattern for external API calls. What should you use?

A) Manual circuit breaker implementation in function code

B) Polly library with circuit breaker policy

C) Azure API Management circuit breaker

D) Retry policy in host.json

Answer: B

Explanation:

The Polly library with a circuit breaker policy provides a robust, well-tested implementation of the circuit breaker pattern with flexible configuration, enabling your Azure Functions to handle failing external services gracefully without the complexity of a manual implementation.

Polly is a .NET resilience and transient fault-handling library providing policies including retry, circuit breaker, timeout, bulkhead isolation, and fallback. The circuit breaker policy monitors for consecutive failures, opening the circuit after a threshold is reached to prevent further calls to failing services. After a timeout period, the circuit moves to half-open, testing if the service recovered. Successful calls close the circuit, resuming normal operation.

Implementation involves installing the Polly NuGet package, defining a circuit breaker policy with configuration like failure threshold, break duration, and what constitutes a failure. The policy wraps external service calls, intercepting exceptions and tracking success/failure. Circuit breaker state transitions happen automatically based on policy configuration.
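A minimal C# sketch with Polly (the thresholds and external endpoint are assumptions; the policy is static so circuit state is shared across invocations on the same host) could look like:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.CircuitBreaker;

public static class ExternalApiClient
{
    private static readonly HttpClient httpClient = new HttpClient();

    // Treat thrown HttpRequestExceptions and 5xx responses as failures.
    private static readonly AsyncCircuitBreakerPolicy<HttpResponseMessage> circuitBreaker =
        Policy
            .Handle<HttpRequestException>()
            .OrResult<HttpResponseMessage>(r => (int)r.StatusCode >= 500)
            .CircuitBreakerAsync(
                3,                              // open after 3 consecutive handled failures
                TimeSpan.FromSeconds(30));      // stay open before the half-open probe

    public static Task<HttpResponseMessage> GetDataAsync() =>
        circuitBreaker.ExecuteAsync(() =>
            httpClient.GetAsync("https://external-api.example.com/data"));  // assumed endpoint
}
```

While the circuit is open, ExecuteAsync fails immediately with a BrokenCircuitException rather than calling the external service, giving the dependency time to recover.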

The pattern provides multiple benefits. It prevents cascading failures by failing fast when downstream services are unavailable, reducing resource consumption and latency waiting for timeouts. The service gets time to recover without continuous request load. Automatic recovery testing through half-open state eliminates manual intervention. Telemetry integration enables monitoring circuit state and failure patterns.

Advanced scenarios include multiple circuit breakers for different external services with appropriate thresholds, combining circuit breaker with retry policies for transient failures before breaking, fallback policies providing alternate responses when circuits are open, and policy wrapping for layered resilience.

Manual implementation is error-prone, missing edge cases that Polly handles. API Management circuit breakers work at gateway level but don’t help functions calling external APIs directly. host.json retry policies handle retries but don’t implement circuit breaking logic. Only Polly provides comprehensive, tested circuit breaker implementation ready for production use.

Q153

You are implementing Azure SQL Database with Auto-failover groups for disaster recovery. How should you configure connection strings in applications?

A) Hard-code primary server name

B) Use failover group listener endpoint

C) Implement manual failover switching in code

D) Use both primary and secondary server names with application logic

Answer: B

Explanation:

Using the failover group listener endpoint provides automatic failover for database connections without application code changes, because the listener endpoint automatically routes connections to the current primary server.

Auto-failover groups provide read-write and read-only listener endpoints that remain constant regardless of which server is primary. The read-write listener (format: failovergroupname.database.windows.net) always points to the current primary database, automatically updating DNS when failover occurs. Applications use this listener in connection strings, automatically connecting to the correct server without code changes.
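A minimal C# sketch (the failover group name sales-fg, database name, and retry counts are assumptions) shows a connection string built on the listener plus simple retry around opening the connection:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

public static class FailoverGroupConnection
{
    // The listener DNS name never changes, so the same connection string works
    // before and after failover.
    private const string ConnectionString =
        "Server=tcp:sales-fg.database.windows.net,1433;" +
        "Database=SalesDb;Authentication=Active Directory Default;" +
        "Encrypt=True;Connect Timeout=30;";

    public static async Task<SqlConnection> OpenAsync()
    {
        var connection = new SqlConnection(ConnectionString);

        for (int attempt = 1; ; attempt++)
        {
            try
            {
                await connection.OpenAsync();
                return connection;
            }
            catch (SqlException) when (attempt < 5)
            {
                // Brief failures are expected while the listener's DNS is
                // repointed to the new primary; back off and try again.
                await Task.Delay(TimeSpan.FromSeconds(2 * attempt));
            }
        }
    }
}
```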

When failover happens (automatic or manual), the read-write listener updates to point to the new primary within seconds. Applications see connection failures during transition, which they handle through retry logic. Once DNS propagates, new connections automatically reach the new primary. Applications don’t need awareness of primary/secondary server topology or failover state.

The read-only listener (format: failovergroupname.secondary.database.windows.net) routes to secondary replicas, enabling read scale-out where reporting or analytics workloads run against secondaries without impacting primary. This provides load distribution while maintaining simple connection string management.

Implementation best practices include configuring appropriate connection timeout and retry policies in application code, enabling connection pooling for performance, monitoring failed connections during failover events, and testing failover procedures regularly to validate application resilience.

DNS caching considerations are important. Client-side DNS caching can delay connection to new primary after failover. Configure reasonable DNS TTL values and implement retry logic that handles brief connection failures. Modern ADO.NET and other Azure SQL client libraries handle this well with built-in retry policies.

Hard-coding server names requires application changes during failover. Manual failover switching adds complexity and delays. Using both names with application logic creates maintenance burden and error potential. Only listener endpoints provide truly automatic, transparent failover.

Q154

You need to implement Azure Event Grid with custom topics. Events should be filtered at subscription level to reduce unnecessary event delivery. What should you configure?

A) Event Grid advanced filters on subscriptions

B) Logic App conditions in subscriber

C) Azure Functions with filtering logic

D) Event Hub filters

Answer: A

Explanation:

Event Grid advanced filters on subscriptions enable server-side event filtering before delivery, reducing bandwidth, processing costs, and latency by delivering only relevant events to each subscriber.

Event Grid subscriptions support advanced filtering based on event properties, enabling sophisticated routing logic. Filters evaluate event data, subject, event type, and custom properties using operators including equals, not equals, greater than, less than, contains, begins with, ends with, and in (for multiple values). Multiple filter conditions in a subscription combine with AND logic, while multiple values supplied to a single filter are matched with OR logic.

The implementation involves defining filter criteria when creating event subscriptions. For example, a subscription might filter events where eventType equals “Order.Created” AND priority equals “High” AND region in (“US”, “EU”). Event Grid evaluates filters server-side before delivery, transmitting only matching events to subscriber endpoints.
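In an ARM template or REST payload, the subscription filter for that example might be sketched as follows (the data.priority and data.region property names are assumptions about the custom event schema):

```json
"filter": {
  "includedEventTypes": [ "Order.Created" ],
  "advancedFilters": [
    { "operatorType": "StringIn", "key": "data.priority", "values": [ "High" ] },
    { "operatorType": "StringIn", "key": "data.region", "values": [ "US", "EU" ] }
  ]
}
```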

Server-side filtering provides significant benefits. Subscribers receive only relevant events, reducing processing overhead. Network bandwidth decreases since irrelevant events aren’t transmitted. Event Grid handles filtering at scale without subscriber infrastructure. Filtering logic is declarative and centrally managed rather than distributed across multiple subscribers.

Advanced filter capabilities include numeric range comparisons for priority or amount thresholds, string operators for pattern matching on categories or tags, and boolean operators for flag checks. Filters work on both system properties and custom event data properties, providing comprehensive event routing control.

Logic App conditions and Function filtering logic work but process all events, consuming resources and bandwidth for filtering that Event Grid could do server-side. Event Hub doesn’t have Event Grid’s filtering capabilities. Only Event Grid advanced filters provide efficient, server-side event routing before delivery.

Q155

You are implementing Azure Cosmos DB change feed with multiple independent processors. Each processor should read the entire change feed independently. What should you configure?

A) Different lease containers for each processor

B) Same lease container shared by all processors

C) Multiple Cosmos DB accounts

D) Read directly from database without leases

Answer: A

Explanation:

Using a different lease container for each processor enables independent change feed processing: each processor maintains its own checkpoint state, allowing multiple independent workflows to process the complete change feed at their own pace.

Change feed processor uses a lease container to track progress through the change feed. Leases represent partitions in the monitored container, and each lease contains a checkpoint indicating how far that processor has read. When multiple processors use the same lease container, they coordinate to distribute partition processing, ensuring each change is processed once across all processor instances. When processors use different lease containers, each maintains independent checkpoint state.

Independent processing enables multiple scenarios. You might have one processor synchronizing data to SQL Database, another sending notifications to users, and a third updating search indexes. Each processes the same changes but at different rates with different error handling and retry policies. If one processor falls behind or fails, others continue unaffected.

Configuration involves specifying unique lease container names when initializing change feed processors. Each processor connects to the same monitored container but different lease containers. The change feed provides consistent ordering within partitions, ensuring all processors see changes in the same sequence.
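A minimal C# sketch with the .NET SDK v3 (database, container, and processor names are assumptions; the handlers are placeholders) wires up two independent processors against the same monitored container:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public record Order(string id, string customerId);   // assumed document shape

public static class ChangeFeedProcessors
{
    public static async Task StartAsync(CosmosClient client)
    {
        Container monitored    = client.GetContainer("appdb", "orders");
        Container sqlLeases    = client.GetContainer("appdb", "leases-sqlsync");
        Container searchLeases = client.GetContainer("appdb", "leases-search");

        // Each processor has its own lease container, so each reads the full
        // change feed independently and keeps its own checkpoints.
        ChangeFeedProcessor sqlSync = monitored
            .GetChangeFeedProcessorBuilder<Order>("sqlSync", SyncToSqlAsync)
            .WithInstanceName(Environment.MachineName)
            .WithLeaseContainer(sqlLeases)
            .Build();

        ChangeFeedProcessor searchIndex = monitored
            .GetChangeFeedProcessorBuilder<Order>("searchIndex", UpdateSearchAsync)
            .WithInstanceName(Environment.MachineName)
            .WithLeaseContainer(searchLeases)
            .Build();

        await sqlSync.StartAsync();
        await searchIndex.StartAsync();
    }

    private static Task SyncToSqlAsync(IReadOnlyCollection<Order> changes, CancellationToken ct)
        => Task.CompletedTask;   // placeholder: write changes to SQL Database

    private static Task UpdateSearchAsync(IReadOnlyCollection<Order> changes, CancellationToken ct)
        => Task.CompletedTask;   // placeholder: update the search index
}
```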

Resource considerations include lease container costs (minimal), the request units each processor consumes reading the change feed from the monitored container, and ensuring provisioned throughput accommodates the combined load of all processors.

Same lease container causes processors to coordinate and distribute work rather than processing independently. Multiple accounts are unnecessary and complicate management. Reading without leases means no checkpoint management, making recovery from failures impossible. Only separate lease containers provide independent processing with reliable checkpoint management.

Q156

You need to implement Azure App Service with custom domain and HTTPS. What are the required steps?

A) Add custom domain and enable HTTPS with free managed certificate

B) Add custom domain only, HTTPS is automatic

C) Purchase SSL certificate and configure manually

D) Use Azure CDN for HTTPS with custom domain

Answer: A

Explanation:

Adding the custom domain and enabling HTTPS with a free managed certificate is the complete solution: App Service managed certificates automatically handle certificate issuance, renewal, and binding without manual certificate management.

Custom domain configuration involves verifying domain ownership by adding DNS records (CNAME or TXT) pointing to your App Service. Once verification succeeds, App Service binds the custom domain to your application. For HTTPS, App Service provides free managed certificates for custom domains, automatically creating and binding SSL/TLS certificates through partnership with DigiCert.

Managed certificates handle the entire certificate lifecycle automatically. App Service creates the certificate, binds it to your custom domain, and automatically renews it before expiration (45 days prior). No manual intervention is needed for renewals. The certificates support both root domains and subdomains, though root domains require additional DNS configuration for verification.

The process through Azure Portal is straightforward: navigate to custom domains, add domain with DNS verification, then navigate to TLS/SSL settings and create a managed certificate for the domain. Binding happens automatically. The certificate is free, removing cost barriers to HTTPS adoption.

Requirements include Basic tier or higher App Service Plan (Free and Shared tiers don’t support custom domains), DNS access to add verification records, and for root domains, ability to add A records pointing to App Service IP address. Wildcard certificates aren’t supported with managed certificates but work with imported certificates.

Manual SSL certificate purchase and configuration works but requires ongoing renewal management and costs. Azure CDN provides HTTPS but adds unnecessary complexity for simple App Service scenarios. App Service managed certificates provide the simplest, most cost-effective HTTPS solution for custom domains.

Q157

You are implementing Azure Functions with Table Storage input binding. The function needs to retrieve a single entity efficiently. What binding configuration should you use?

A) Table input binding with partition key and row key

B) Table input binding with filter expression

C) Table storage SDK in function code

D) Queue trigger with table query

Answer: A

Explanation:

A Table input binding with partition key and row key provides the most efficient single-entity retrieval by using a point query, which Azure Table Storage optimizes, requiring only the specific entity's partition key and row key for direct access.

Table Storage organizes data by partition key and row key combination, which uniquely identifies each entity. Point queries using both keys execute with optimal performance and minimal cost, reading only the targeted entity without scanning. The input binding configuration specifies partition key and row key values, which can be static strings or dynamic expressions using trigger data.

The binding declaration uses attributes (C#) or binding configuration (JavaScript, Python) specifying table name, partition key, and row key. For example, a queue-triggered function might use message properties as partition and row keys to retrieve related entities. The binding resolves expressions at runtime, fetching the entity before function execution. If the entity exists, it’s passed to the function; if not, null is provided.
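A minimal C# sketch for the in-process model with the Tables extension (the queue, table, and property names are assumptions) shows the declarative point lookup driven by queue message properties:

```csharp
using System;
using Azure;
using Azure.Data.Tables;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class OrderEntity : ITableEntity
{
    public string PartitionKey { get; set; }        // customer id
    public string RowKey { get; set; }              // order id
    public double Total { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }
}

public class OrderLookupMessage
{
    public string CustomerId { get; set; }
    public string OrderId { get; set; }
}

public static class GetOrder
{
    [FunctionName("GetOrder")]
    public static void Run(
        [QueueTrigger("order-lookups")] OrderLookupMessage message,
        // {CustomerId} and {OrderId} resolve from the queue message, producing a point query.
        [Table("Orders", "{CustomerId}", "{OrderId}")] OrderEntity order,
        ILogger log)
    {
        if (order == null)
        {
            log.LogWarning("Order {OrderId} not found for customer {CustomerId}.",
                message.OrderId, message.CustomerId);
            return;
        }

        log.LogInformation("Order total: {Total}", order.Total);
    }
}
```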

This approach provides multiple benefits. The declarative binding eliminates boilerplate table access code, making functions cleaner and more testable. Performance is optimal since point queries are the fastest Table Storage operation. Cost is minimal because a point query is a single, cheap storage transaction. Error handling for entity not found is simplified through null checks.

The binding works with both ITableEntity types for full entity access and dynamic objects for flexible property access. You can also bind to collections for scenarios requiring multiple entities, though point queries with single entities are most efficient.

Filter expressions require query operations scanning potentially many rows, less efficient than point queries. SDK in function code works but adds boilerplate and doesn’t leverage binding’s declarative simplicity. Queue trigger with table query adds unnecessary complexity. Only input binding with partition and row key provides optimal single-entity retrieval.

Q158

You need to implement Azure Service Bus with message deferral for processing messages later based on business logic. What should you use?

A) Complete message and re-send later

B) Defer message with sequence number for later retrieval

C) Dead-letter message and retrieve later

D) Use scheduled messages

Answer: B

Explanation:

Deferring the message and retrieving it later by its sequence number provides the correct mechanism for temporarily skipping messages that cannot be processed now but should remain in the queue for later processing based on business conditions.

Message deferral in Service Bus enables receivers to postpone message processing by calling Defer on the received message. The message remains in the queue but moves out of the regular delivery sequence. A sequence number is returned which can be stored and used later to retrieve the deferred message explicitly using ReceiveDeferredMessage. This supports scenarios where message processing depends on conditions not yet met, like dependent messages arriving out of order.

The pattern works for order processing where later steps require earlier steps to complete first. When a step arrives before its prerequisites, it’s deferred. When prerequisites complete, deferred messages are retrieved by sequence number and processed. The queue maintains deferred messages indefinitely until explicitly retrieved or the message TTL expires.

Implementation considerations include persisting sequence numbers for later retrieval, implementing business logic to determine when to retrieve deferred messages, and handling cases where deferred messages’ prerequisites never arrive (using TTL expiration and dead-lettering). Deferred messages don’t count toward active message count for autoscaling purposes but do consume quota.
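A minimal C# sketch with Azure.Messaging.ServiceBus (the prerequisite check and handlers are placeholder assumptions) shows both halves of the pattern:

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class DeferralExample
{
    // Skip a message whose prerequisites aren't met yet, keeping it in the queue.
    public static async Task<long?> ReceiveOrDeferAsync(ServiceBusReceiver receiver)
    {
        ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
        if (message == null) return null;

        if (!PrerequisitesMet(message))                 // assumed business check
        {
            await receiver.DeferMessageAsync(message);
            return message.SequenceNumber;              // persist this for later retrieval
        }

        await ProcessAsync(message);                    // assumed handler
        await receiver.CompleteMessageAsync(message);
        return null;
    }

    // Later, once the prerequisites are satisfied:
    public static async Task ProcessDeferredAsync(ServiceBusReceiver receiver, long sequenceNumber)
    {
        ServiceBusReceivedMessage deferred = await receiver.ReceiveDeferredMessageAsync(sequenceNumber);
        await ProcessAsync(deferred);
        await receiver.CompleteMessageAsync(deferred);
    }

    private static bool PrerequisitesMet(ServiceBusReceivedMessage m) => true;            // placeholder
    private static Task ProcessAsync(ServiceBusReceivedMessage m) => Task.CompletedTask;  // placeholder
}
```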

Security and ordering are maintained – only the receiver that deferred a message or has its sequence number can retrieve it. Messages can be deferred multiple times if conditions still aren’t met. The pattern integrates with peek-lock semantics for reliability.

Completing and re-sending creates new messages losing original metadata and ordering position. Dead-lettering is for failed messages, not business deferral. Scheduled messages are for future delivery of new messages, not deferring existing ones. Only deferral provides the semantics needed for conditional processing postponement.

Q159

You are implementing Azure Cosmos DB with Global Distribution. You need to configure automatic failover with specific region priority. What should you configure?

A) Read regions list only

B) Write region and read regions with failover priorities

C) Manual failover configuration

D) Multi-master with conflict resolution

Answer: B

Explanation:

Configuring the write region and read regions with failover priorities enables automatic failover, where Cosmos DB promotes secondary regions to primary in the defined priority order when the primary region becomes unavailable.

Global distribution in Cosmos DB allows adding multiple regions for read and write operations. With single-write region accounts, one region serves as the write region (primary) while others are read regions (secondaries). Automatic failover requires configuring failover priority for secondary regions, establishing the order in which regions are promoted during primary region outages.

Configuration involves assigning failover priorities to the regions in the account: the write region holds priority 0, and each secondary region gets a unique priority value (1, 2, and so on), with the lowest-numbered available secondary promoted first. When Cosmos DB detects the primary region is unavailable through health monitoring, it automatically promotes the highest-priority available secondary to primary. Write operations automatically redirect to the new primary while read operations continue from other regions. When the original region recovers, it becomes a secondary.

The automatic failover process typically completes within minutes. Applications using Cosmos DB SDK with multi-region support automatically discover the new write region and reconnect without code changes. The SDK implements retry logic and region discovery, providing transparent failover from application perspective.
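On the application side, a minimal C# sketch (the account endpoint and region list are assumptions) configures the SDK's preferred regions; the failover priorities themselves are set on the account through the portal, CLI, or ARM rather than in code:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

public static class CosmosClientFactory
{
    public static CosmosClient Create(string accountKey)
    {
        return new CosmosClient(
            "https://myaccount.documents.azure.com:443/",   // assumed account endpoint
            accountKey,
            new CosmosClientOptions
            {
                // Ordered read preference; the SDK rediscovers the write region
                // automatically after an automatic failover.
                ApplicationPreferredRegions = new List<string> { Regions.EastUS, Regions.WestUS }
            });
    }
}
```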

Considerations for failover priority include geographic proximity to users for optimal latency after failover, regulatory or compliance requirements for data residency, cost optimization by preferring lower-cost regions, and availability zone support in failover regions for additional resilience.

Automatic failover provides better availability than manual failover which requires detection and operator intervention. Applications benefit from faster recovery. Manual failover remains available for planned maintenance or testing scenarios.

Read regions alone don’t provide write availability during primary region failure. Multi-master enables multiple write regions but uses different consistency and conflict resolution models. Only automatic failover with priorities provides the availability guarantees needed for single-write-region accounts during regional outages.

Q160

You need to implement Azure API Management with JWT validation for OAuth 2.0 access tokens. What policy should you configure?

A) Basic authentication policy

B) validate-jwt policy in inbound section

C) validate-certificate policy

D) check-header policy with custom validation

Answer: B

Explanation:

The validate-jwt policy in the inbound section provides comprehensive JWT token validation, including signature verification, issuer validation, audience validation, and claims checking, ensuring only valid OAuth 2.0 access tokens reach your APIs.

The validate-jwt policy inspects tokens in the Authorization header (or other configured locations), validates the signature using keys from the specified OpenID configuration endpoint or explicitly configured keys, and checks standard and custom claims. Invalid tokens result in 401 Unauthorized responses, blocking requests before they reach backend services.

Configuration includes specifying header name where tokens appear (typically Authorization), the OpenID metadata URL from your identity provider (Azure AD, IdentityServer, etc.) for automatic key retrieval, required claim values (audience, issuer), and optional custom claim requirements. The policy handles token signature validation using keys from the OpenID endpoint, supporting key rotation automatically.
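A policy sketch along these lines (the tenant, audience, and scope values are assumptions for an Azure AD-protected API) shows the typical shape:

```xml
<inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401"
                  failed-validation-error-message="Unauthorized" require-scheme="Bearer">
        <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
        <audiences>
            <audience>api://orders-api</audience>
        </audiences>
        <required-claims>
            <claim name="scp" match="any">
                <value>orders.read</value>
            </claim>
        </required-claims>
    </validate-jwt>
</inbound>
```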

The policy supports both symmetric and asymmetric signing keys, handles standard JWT claims like exp (expiration), nbf (not before), and custom claims for application-specific authorization. You can require specific scopes or roles, check custom claims, and combine JWT validation with other policies for comprehensive security.

Advanced scenarios include validating tokens from multiple identity providers using multiple validate-jwt policies, extracting claims and passing them to backends for context, and validating both primary tokens and refresh tokens. The policy integrates with Azure AD B2C, Auth0, IdentityServer, and any OAuth-compliant provider.

Error messages from validation failures can be customized to provide appropriate information to clients without exposing security details. The policy fails fast, preventing resource consumption by unauthorized requests.

Basic authentication uses username/password, not JWTs. Certificate validation is for client certificates, not bearer tokens. check-header policy lacks the JWT-specific validation logic needed for proper token security. Only validate-jwt provides comprehensive OAuth 2.0 token validation.
