Microsoft AZ-204 Developing Solutions for Azure Exam Dumps and Practice Test Questions Set10 Q181-200

Question 181

You are implementing Azure Cosmos DB for a globally distributed application. The application requires strong consistency for write operations but can tolerate eventual consistency for read operations. What consistency level should you configure?

A) Strong

B) Bounded Staleness

C) Session

D) Eventual

Answer: B

Explanation:

Bounded Staleness consistency level provides the optimal balance for scenarios requiring strong consistency guarantees for writes while allowing slightly relaxed consistency for reads. This level guarantees that reads lag behind writes by at most a specified number of versions or time interval, providing predictable staleness boundaries.

When you configure Bounded Staleness, you specify either a maximum lag in terms of operations (versions) or time. For example, you might configure reads to lag behind writes by at most 100,000 operations or 5 minutes, whichever is reached first. Within a single region, Bounded Staleness behaves like Strong consistency, but across regions, it allows controlled staleness, improving read performance and availability.
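
As an illustration, here is a minimal C# sketch using the Microsoft.Azure.Cosmos SDK. The endpoint, key, and region are placeholders, and the account's default consistency (Bounded Staleness with its maximum lag settings) is assumed to have been configured at the account level in the portal, CLI, or an ARM template; a client can only request a level equal to or weaker than that default.

```csharp
using Microsoft.Azure.Cosmos;

// Hypothetical endpoint and key. The Bounded Staleness settings
// (max lag in operations or time) live on the Cosmos DB account itself,
// not in application code.
CosmosClient client = new CosmosClient(
    "https://my-cosmos-account.documents.azure.com:443/",
    "<account-key>",
    new CosmosClientOptions
    {
        // Explicitly request Bounded Staleness for this client's requests
        // (must be the same as, or weaker than, the account default).
        ConsistencyLevel = ConsistencyLevel.BoundedStaleness,
        ApplicationRegion = Regions.WestUS
    });
```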

This consistency level is ideal for globally distributed applications where you need strong ordering guarantees and want to avoid conflicts, but can accept bounded lag for reads in remote regions. Common use cases include financial applications requiring audit trails, inventory management systems, and applications with read-heavy workloads across multiple regions.

Bounded Staleness provides better read performance and availability compared to Strong consistency because reads don’t need to wait for replication across all regions. It still maintains the total order of operations, meaning all replicas see operations in the same order, preventing anomalies like reading older versions after reading newer ones.

Option A (Strong consistency) provides linearizability and ensures reads always return the most recent committed write, but it comes with higher latency and lower availability because reads must wait for replication across all regions. This impacts performance significantly in multi-region deployments.

Option C (Session consistency) guarantees consistency within a single client session but doesn’t provide strong guarantees across different sessions or clients. While it offers good performance, it doesn’t meet the requirement for strong consistency across write operations from different clients.

Option D (Eventual consistency) provides the highest availability and lowest latency but offers no guarantees about read staleness. Reads might return significantly outdated data, which doesn’t satisfy the requirement for controlled consistency.

Question 182

You need to implement an Azure Function that processes messages from multiple Azure Service Bus queues. The function must scale independently for each queue. What should you configure?

A) Single function with multiple ServiceBusTrigger attributes

B) Separate function for each queue with individual ServiceBusTrigger

C) Single function with dynamic queue binding

D) Function with input binding to multiple queues

Answer: B

Explanation:

Creating separate functions for each queue with individual ServiceBusTrigger attributes is the recommended approach for independent scaling. Each function has its own scaling controller that monitors the queue length and scales instances independently based on the specific queue’s workload, providing optimal resource utilization.

Azure Functions’ scaling mechanism evaluates each function independently. When you have separate functions for different queues, the Functions runtime monitors each queue’s metrics separately. If one queue has a high message backlog, only that function scales out, while other functions with lower workloads maintain fewer instances. This prevents over-provisioning and optimizes costs.

Each function can also have different configurations specific to its queue’s requirements. You can set different batch sizes, maximum concurrent calls, and scaling thresholds. For example, one queue handling critical transactions might have higher concurrency limits, while another handling background tasks might have lower limits to prevent resource contention.
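
As a rough sketch (in-process C# model), two separate functions, each bound to its own queue so the scale controller evaluates them independently; the queue names and the ServiceBusConnection app setting are placeholders:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueProcessors
{
    // Scales out based solely on the backlog of the "orders" queue.
    [FunctionName("ProcessOrders")]
    public static void ProcessOrders(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Order message: {Message}", message);
    }

    // Scales independently, driven by the "notifications" queue.
    [FunctionName("ProcessNotifications")]
    public static void ProcessNotifications(
        [ServiceBusTrigger("notifications", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Notification message: {Message}", message);
    }
}
```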

The separate function approach also improves maintainability and monitoring. Each function appears as a distinct entity in Application Insights, making it easier to track performance, errors, and resource consumption per queue. You can set different alert thresholds and apply different retry policies based on message importance.

Option A (multiple ServiceBusTrigger attributes on single function) is not supported in Azure Functions. A function can only have one trigger, and you cannot apply multiple trigger attributes to the same function method.

Option C (dynamic queue binding) allows the queue name to be determined at runtime through binding expressions, but all invocations still share the same scaling behavior. The function scales based on aggregate metrics across all queues, not independently per queue.

Option D (input binding to multiple queues) is not a valid pattern for triggers. Input bindings are for reading data within a function execution, not for triggering function execution based on new messages.

Question 183

You are developing a REST API using Azure API Management. The API must transform JSON responses to XML for legacy clients. What should you implement?

A) Inbound processing policy with json-to-xml transformation

B) Outbound processing policy with json-to-xml transformation

C) Backend policy with content transformation

D) Custom middleware in the backend service

Answer: B

Explanation:

Outbound processing policy with json-to-xml transformation is the correct approach because response transformation occurs in the outbound pipeline after the backend service returns the response. The outbound section processes the response before sending it to the client, making it the appropriate place for converting JSON responses to XML format.

Azure API Management policies execute in a specific order through four sections: inbound, backend, outbound, and on-error. The json-to-xml policy should be placed in the outbound section to transform the JSON response from your backend service into XML format before it reaches the client. This transformation is transparent to the backend service, which continues to work with JSON.

The policy configuration is straightforward and can include options for handling arrays, namespaces, and attribute mapping. You can apply the policy at different scopes including global, product, API, or operation level, allowing you to selectively transform responses based on requirements. For example, you might apply transformation only to specific operations or for clients identified by subscription keys.

This approach keeps your backend service simple and JSON-based while supporting legacy XML clients through API Management transformation. You can also use conditional policies to apply transformation only when clients request XML format through Accept headers, supporting both JSON and XML clients with the same backend.
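
A minimal policy sketch with json-to-xml in the outbound section; the attribute values shown (converting only JSON responses and honoring the client's Accept header) are illustrative choices, not requirements:

```xml
<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <!-- Convert the backend's JSON response to XML before returning it to the client -->
        <json-to-xml apply="content-type-json" consider-accept-header="true" />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```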

Option A (inbound processing policy) executes before the request reaches the backend service and is used for transforming requests, not responses. Using json-to-xml in the inbound section would attempt to transform the incoming request, which doesn’t make sense for response transformation.

Option C (backend policy) is used for controlling how API Management communicates with the backend service, such as modifying the backend URL or adding headers to backend requests. It’s not the appropriate place for response transformation.

Option D (custom middleware in backend service) would require implementing transformation logic in your service code, adding complexity and coupling your service to client format requirements rather than keeping this concern in the API gateway layer.

Question 184

You need to implement Azure Application Insights for a microservices architecture with distributed tracing. Each service must contribute to the same end-to-end transaction trace. What should you ensure?

A) Each service uses the same instrumentation key

B) Request correlation headers are propagated between services

C) All services log to the same Log Analytics workspace

D) Each service implements custom correlation logic

Answer: B

Explanation:

Propagating request correlation headers between services is essential for distributed tracing in Application Insights. The correlation headers, specifically the traceparent and tracestate headers defined by the W3C Trace Context standard (or the legacy Request-Id and Correlation-Context headers used by older SDKs), link telemetry across service boundaries, creating complete end-to-end transaction traces.

When a request enters your system, Application Insights generates a unique operation ID. As the request flows through different microservices, each service must extract correlation headers from incoming requests and include them in outgoing requests to downstream services. This header propagation creates a chain of correlated telemetry that Application Insights uses to reconstruct the complete transaction flow.

Modern Application Insights SDKs automatically handle correlation header propagation for HTTP requests. When you use HttpClient in .NET Core with Application Insights configured, the SDK automatically adds correlation headers to outgoing requests and extracts them from incoming requests. Similar automatic instrumentation exists for other platforms and languages.

The Application Map feature in Application Insights uses these correlations to visualize service dependencies and show how requests flow through your architecture. The end-to-end transaction view displays the complete trace with timing information for each service call, making it easy to identify performance bottlenecks or failures across the distributed system.

Option A (same instrumentation key) is recommended so that telemetry from all services lands in the same Application Insights resource and configuration stays simple, but the instrumentation key alone doesn’t create the correlation. Services could share an instrumentation key yet still produce uncorrelated traces if headers aren’t propagated.

Option C (same Log Analytics workspace) relates to where telemetry data is stored and queried but doesn’t establish correlation between requests. Multiple services can log to the same workspace without creating correlated traces if headers aren’t propagated.

Option D (custom correlation logic) is unnecessary because Application Insights SDKs handle correlation automatically. Implementing custom logic would duplicate functionality and likely introduce inconsistencies with the standard correlation mechanism.

Question 185

You are implementing Azure Key Vault for storing application secrets. The application runs on Azure App Service. You need to grant the application access to secrets without using connection strings or keys. What should you configure?

A) Access policies with service principal credentials

B) System-assigned managed identity with Key Vault access policy

C) Shared access signatures for Key Vault

D) Azure AD application registration with client secret

Answer: B

Explanation:

System-assigned managed identity with Key Vault access policy provides the most secure and recommended approach for granting App Service access to Key Vault without using credentials. Managed identity eliminates the need for storing any credentials in your application code or configuration, as Azure automatically manages the identity lifecycle and authentication.

When you enable system-assigned managed identity on your App Service, Azure creates an identity in Azure AD that is tied to the App Service’s lifecycle. This identity is automatically deleted when the App Service is deleted. Your application code authenticates to Key Vault using this identity through the Azure SDK, without requiring any credentials in code or configuration files.

After enabling managed identity, you configure Key Vault access policies to grant the managed identity permission to read secrets. You can specify granular permissions like Get Secrets, List Secrets, or specific secret names. The combination of managed identity for authentication and access policies for authorization provides secure access control.

In your application code, you use DefaultAzureCredential from the Azure SDK, which automatically discovers and uses the managed identity when running on Azure. The same code works in development using your developer credentials and in production using managed identity, without code changes. This approach follows zero-trust security principles by eliminating long-lived credentials.
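
A minimal sketch using the Azure.Identity and Azure.Security.KeyVault.Secrets packages; the vault URI and secret name are placeholders:

```csharp
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential picks up the App Service's managed identity in Azure
// and falls back to developer credentials (Visual Studio, Azure CLI) locally.
var client = new SecretClient(
    new Uri("https://my-key-vault.vault.azure.net/"),
    new DefaultAzureCredential());

KeyVaultSecret secret = await client.GetSecretAsync("DatabaseConnectionString");
string connectionString = secret.Value;
```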

Option A (access policies with service principal credentials) requires storing a client ID and secret somewhere, defeating the purpose of eliminating credentials. Service principals with credentials are less secure than managed identities for Azure resources.

Option C (shared access signatures) is incorrect because SAS tokens are used for Azure Storage access delegation, not for Key Vault authentication. Key Vault uses Azure AD authentication with either service principals or managed identities.

Option D (Azure AD application registration with client secret) requires storing the client secret in your application configuration, which creates the same security risk that managed identity eliminates. This approach is necessary only when managed identity isn’t available.

Question 186

You need to implement automatic scaling for an Azure App Service based on CPU usage. The scaling should occur when average CPU exceeds 70% for 5 minutes. What should you configure?

A) Scale-up rule in App Service plan

B) Autoscale rule with CPU percentage metric

C) Manual scale settings with instance count

D) Azure Monitor alert with automation runbook

Answer: B

Explanation:

Autoscale rule with CPU percentage metric is the native and recommended approach for implementing automatic scaling based on CPU usage in Azure App Service. Autoscale provides declarative scaling rules that automatically adjust instance count based on metric thresholds, ensuring your application has sufficient resources during high load periods.

When you configure autoscale, you create a scale condition with rules defining when to scale out (add instances) and scale in (remove instances). For CPU-based scaling, you create a rule that monitors the average CPU percentage metric across all instances. The rule specifies the threshold (70%), the time window for evaluation (5 minutes), and the action to take (increase instance count by a specified amount).

Autoscale includes important features for stable scaling behavior. The cooldown period prevents rapid scaling oscillations by waiting a specified time after a scale operation before evaluating rules again. This prevents situations where scaling out reduces CPU, triggering an immediate scale-in. You can configure different cooldown periods for scale-out and scale-in operations.

The autoscale profile supports multiple scale conditions for different scenarios. You can create schedule-based rules for predictable load patterns, date-specific rules for known events, and metric-based rules for responsive scaling. You can also set minimum and maximum instance counts to control costs and ensure availability. Autoscale integrates with Application Insights for custom metrics if CPU isn’t the right indicator.

Option A (scale-up rule) refers to vertical scaling, which changes the App Service plan tier to use more powerful instances. This doesn’t address horizontal scaling based on metrics and requires manual intervention or different automation approaches.

Option C (manual scale settings) requires you to manually adjust the instance count. This doesn’t provide automatic scaling based on metrics and is not responsive to load changes without human intervention.

Option D (Azure Monitor alert with automation runbook) is a custom approach requiring scripts to scale resources, adding complexity and maintenance overhead compared to the built-in autoscale functionality that provides this capability natively.

Question 187

You are implementing Azure Cognitive Services Custom Vision for image classification. You need to improve model accuracy. What should you do?

A) Increase the number of iterations during training

B) Provide more diverse training images for each tag

C) Use a higher pricing tier for the service

D) Increase the image resolution in training data

Answer: B

Explanation:

Providing more diverse training images for each tag is the most effective approach for improving Custom Vision model accuracy. Diversity in training data helps the model learn to recognize objects under various conditions, lighting, angles, backgrounds, and contexts, making it more robust and accurate in real-world scenarios.

Model accuracy depends heavily on the quality and diversity of training data. For effective training, you should provide images showing the object from different angles, distances, and perspectives. Include variations in lighting conditions, backgrounds, and contexts where the object might appear. For example, if training a model to recognize cars, include images of cars in different colors, models, weather conditions, and surroundings.

The recommended approach is to start with at least 50 diverse images per tag, though more images generally improve accuracy up to a point. Quality matters more than quantity – 50 well-chosen diverse images often outperform 200 similar images. You should also ensure balanced training data with similar numbers of images across tags to prevent model bias toward overrepresented categories.

Custom Vision provides performance metrics after each training iteration, including precision, recall, and average precision. These metrics help you identify which tags need more training data or better diversity. You can iteratively improve the model by analyzing misclassifications and adding training images that address specific weaknesses.

Option A (increasing iterations) doesn’t improve accuracy if the training data itself is insufficient or lacks diversity. More iterations with limited data can lead to overfitting, where the model memorizes training images but performs poorly on new images.

Option C (higher pricing tier) provides more training capacity and features but doesn’t directly improve model accuracy. The basic tier can achieve excellent accuracy with proper training data and technique.

Option D (higher image resolution) can help in some cases, especially for small objects or fine details, but Custom Vision automatically resizes images during training. Diversity in content matters more than resolution for most classification tasks.

Question 188

You need to implement a solution that processes streaming data from IoT devices in real-time and stores aggregated results. What should you use?

A) Azure Event Hub with Azure Stream Analytics and Cosmos DB

B) Azure Service Bus with Azure Functions and SQL Database

C) Azure Storage Queue with Azure Logic Apps and Table Storage

D) Azure Event Grid with Azure Functions and Blob Storage

Answer: A

Explanation:

Azure Event Hub with Azure Stream Analytics and Cosmos DB provides the complete solution optimized for real-time streaming data processing from IoT devices. This combination offers high-throughput ingestion, real-time analytics processing, and scalable storage for aggregated results.

Azure Event Hub serves as the ingestion point, capable of receiving millions of events per second from IoT devices. It provides reliable buffering and partitioning for parallel processing. Event Hub retains events for a configurable period, allowing reprocessing if needed. The service integrates seamlessly with IoT Hub for device management scenarios.

Azure Stream Analytics processes the streaming data in real-time using SQL-like queries. It performs windowing operations for time-based aggregations, joins streams with reference data, and detects patterns or anomalies. Stream Analytics supports tumbling, hopping, and sliding windows for different aggregation scenarios. The service scales automatically based on Streaming Units to handle varying loads.
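
For illustration, a sketch of a Stream Analytics query performing a tumbling-window aggregation; the input alias eventHubInput, output alias cosmosOutput, and the field names are assumptions defined in the job's input/output configuration:

```sql
-- 5-minute average temperature per device, written continuously to Cosmos DB
SELECT
    deviceId,
    AVG(temperature) AS avgTemperature,
    System.Timestamp() AS windowEnd
INTO cosmosOutput
FROM eventHubInput TIMESTAMP BY eventTime
GROUP BY deviceId, TumblingWindow(minute, 5)
```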

Cosmos DB serves as the output destination for aggregated results, providing low-latency writes and global distribution. Its ability to handle high write throughput makes it ideal for storing continuous stream processing results. Cosmos DB’s flexible schema accommodates changing data structures, and its indexing automatically optimizes queries on aggregated data.

Option B (Service Bus with Functions and SQL Database) is designed for message-based communication rather than high-throughput streaming. Service Bus has lower throughput limits compared to Event Hub, and SQL Database may become a bottleneck for high-volume streaming writes.

Option C (Storage Queue with Logic Apps and Table Storage) doesn’t provide real-time processing capabilities. Storage Queues have polling delays, Logic Apps aren’t optimized for streaming analytics, and the solution would have higher latency compared to Stream Analytics.

Option D (Event Grid with Functions and Blob Storage) is event-driven rather than streaming-focused. Event Grid excels at discrete events with routing logic, but doesn’t provide the windowing and aggregation capabilities that Stream Analytics offers for continuous streaming data.

Question 189

You are developing an Azure Function that needs to call multiple downstream APIs in parallel to reduce overall latency. What pattern should you implement?

A) Sequential API calls with await keyword

B) Task.WhenAll with concurrent async operations

C) Separate function invocations for each API

D) Durable Functions with fan-out/fan-in pattern

Answer: B

Explanation:

Task.WhenAll with concurrent async operations provides the most efficient pattern for calling multiple APIs in parallel within a single Azure Function execution. This approach allows multiple HTTP requests to execute concurrently, significantly reducing total execution time compared to sequential calls.

When you use Task.WhenAll, you create multiple async tasks without immediately awaiting them, allowing them to execute concurrently. After creating all tasks, you await Task.WhenAll, which completes when all tasks finish. This pattern is ideal when API calls are independent and don’t depend on each other’s results. For example, calling three APIs sequentially might take 300ms each for 900ms total, while parallel execution takes approximately 300ms total.

The implementation is straightforward in C# using HttpClient. You create multiple HTTP request tasks, collect them in an array or list, and await Task.WhenAll. The result is an array containing responses from all API calls. You can also use Task.WhenAny if you need to proceed as soon as the first API responds, or implement timeout handling per task.
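
A minimal sketch, assuming three hypothetical endpoints and a shared HttpClient instance:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class DownstreamApis
{
    private static readonly HttpClient httpClient = new HttpClient();

    public static async Task<string[]> CallApisInParallelAsync()
    {
        // Start all three calls without awaiting them individually so they run concurrently.
        Task<string> inventoryTask = httpClient.GetStringAsync("https://api.example.com/inventory");
        Task<string> pricingTask   = httpClient.GetStringAsync("https://api.example.com/pricing");
        Task<string> shippingTask  = httpClient.GetStringAsync("https://api.example.com/shipping");

        // Completes when all three finish; total time is roughly the slowest
        // call rather than the sum of all calls.
        return await Task.WhenAll(inventoryTask, pricingTask, shippingTask);
    }
}
```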

This pattern respects async/await best practices by not blocking threads. While tasks execute, the thread returns to the thread pool, allowing the Function runtime to handle other requests efficiently. This is crucial in Azure Functions where resource utilization affects cost and scalability. The pattern also handles exceptions gracefully, allowing you to implement retry logic or fallback behavior for individual API failures.

Option A (sequential calls with await) calls APIs one after another, waiting for each to complete before starting the next. Total latency equals the sum of all API call durations plus network overhead, which defeats the goal of reducing it.

Option C (separate function invocations) introduces significant overhead including function cold starts, additional logging, and coordination complexity. This pattern is appropriate for long-running or independent tasks but excessive for parallel API calls within a single logical operation.

Option D (Durable Functions fan-out/fan-in) is designed for long-running workflows or scenarios requiring persistent orchestration state. For simple parallel API calls, it introduces unnecessary complexity and overhead including additional storage operations for checkpointing.

Question 190

You need to configure Azure Cosmos DB to automatically delete documents after a specific time period. What should you implement?

A) Change feed with Azure Function for deletion

B) Time to Live (TTL) property on documents

C) Stored procedure with scheduled execution

D) Cosmos DB trigger with deletion logic

Answer: B

Explanation:

Time to Live (TTL) property on documents is the native and most efficient solution for automatically deleting documents after a specific period. TTL is a built-in Cosmos DB feature that automatically removes documents without consuming request units for the deletion operation, making it cost-effective and performant.

When you enable TTL on a Cosmos DB container, you can set a default TTL value in seconds that applies to all documents, or specify individual TTL values on specific documents through the ttl property. Documents are automatically deleted when the TTL period expires after their last modification timestamp. A TTL value of -1 means the document never expires, while null or absence of the property means the document inherits the container’s default TTL.
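
A brief sketch with the Microsoft.Azure.Cosmos SDK; the account endpoint, database, container, and partition key path are placeholders:

```csharp
using System;
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json;

CosmosClient client = new CosmosClient(
    "https://my-cosmos-account.documents.azure.com:443/", "<account-key>");
Database database = await client.CreateDatabaseIfNotExistsAsync("appdata");

// Container-level default: items expire 24 hours after their last write
// unless an item overrides it with its own ttl value.
ContainerProperties properties = new ContainerProperties("sessions", "/userId")
{
    DefaultTimeToLive = 86400 // seconds
};
Container container = await database.CreateContainerIfNotExistsAsync(properties);

// Per-item override: this document expires one hour after its last write.
await container.UpsertItemAsync(new SessionDocument
{
    Id = Guid.NewGuid().ToString(),
    UserId = "user-123",
    TimeToLive = 3600
});

public class SessionDocument
{
    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("userId")]
    public string UserId { get; set; }

    // Cosmos DB reads the per-item TTL from a property named "ttl".
    [JsonProperty("ttl")]
    public int? TimeToLive { get; set; }
}
```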

The deletion process runs as a background task that doesn’t consume provisioned throughput. Cosmos DB marks expired documents as deleted and removes them during subsequent compaction processes. While there may be a slight delay between expiration and physical deletion, expired documents are filtered from queries immediately, so they’re not visible to applications even before physical removal.

TTL is ideal for scenarios like session management, temporary data caching, audit log retention, and any use case requiring automatic data cleanup. You can combine TTL with change feed to perform cleanup actions before document deletion, such as archiving data or triggering notifications.

Option A (change feed with Azure Function) introduces complexity and costs. You would need to track document creation times, calculate expiration, and issue delete operations that consume RUs. This approach requires maintaining infrastructure and handling errors, unlike the built-in TTL feature.

Option C (stored procedure with scheduled execution) requires external orchestration to trigger the stored procedure regularly. Stored procedures consume RUs for both querying expired documents and deleting them. This approach is less efficient and requires more maintenance than TTL.

Option D (Cosmos DB trigger) executes in response to data changes but doesn’t provide scheduling functionality for periodic cleanup. Triggers also consume RUs and require Azure Functions infrastructure for hosting.

Question 191

You are implementing Azure Front Door for a global web application. You need to ensure that users are routed to the nearest healthy backend. What routing method should you configure?

A) Weighted routing

B) Priority routing

C) Latency routing

D) Session affinity routing

Answer: C

Explanation:

Latency routing in Azure Front Door automatically directs users to the backend with the lowest network latency, ensuring optimal performance by connecting users to the geographically nearest healthy backend. This routing method continuously measures latency from Front Door’s edge locations to your backends and routes traffic accordingly.

Front Door performs health probes to determine backend availability and measures latency from its globally distributed edge locations to each backend. When a request arrives at a Front Door edge location, the service evaluates which backends are healthy and selects the one with the lowest latency from that specific edge location. This ensures users consistently get the best possible performance.

The latency-based routing works automatically without requiring manual configuration of geographic mappings. As your backend infrastructure changes or as network conditions vary, Front Door adapts routing decisions in real-time. The service also implements automatic failover when a backend becomes unhealthy, routing traffic to the next best available backend.

This routing method is ideal for globally distributed applications where you have multiple regional deployments and want to ensure each user connects to their closest instance. Combined with Front Door’s global anycast network, latency routing provides excellent performance and availability for worldwide audiences.

Option A (weighted routing) distributes traffic across backends according to configured weight values, used for gradual rollouts or A/B testing scenarios. It doesn’t consider latency or geographic proximity, so users might be routed to distant backends.

Option B (priority routing) sends all traffic to the highest priority backend and fails over to lower priority backends only when higher priority ones are unavailable. This is used for active-passive failover scenarios rather than performance optimization.

Option D (session affinity) routes subsequent requests from the same user to the same backend to maintain session state, but it doesn’t ensure users are initially routed to the lowest latency backend. It’s used alongside other routing methods to maintain stickiness.

Question 192

You need to implement Azure Notification Hubs to send push notifications to mobile devices across multiple platforms. What should you configure?

A) Platform Notification Service (PNS) credentials for each platform

B) Single API endpoint with device tokens

C) Azure Event Grid subscriptions

D) Azure Service Bus topics for each platform

Answer: A

Explanation:

Configuring Platform Notification Service (PNS) credentials for each platform is the required setup for Azure Notification Hubs to send cross-platform push notifications. Each mobile platform (iOS, Android, Windows) has its own PNS that requires specific credentials for authentication and authorization to send notifications.

For iOS devices, you configure Apple Push Notification Service (APNS) credentials, either using certificate-based authentication with a .p12 certificate or token-based authentication with an authentication key. For Android, you configure Firebase Cloud Messaging (FCM) credentials including the server key or service account JSON. For Windows, you configure Windows Push Notification Service (WNS) credentials from the Microsoft Store developer center.

After configuring PNS credentials in your Notification Hub, you can send notifications to devices across all platforms using a single API call. Notification Hubs handles the platform-specific communication with each PNS, translating your notification content into the appropriate format for each platform. This abstraction simplifies your backend code significantly.

The service also provides advanced features like templates that allow devices to register for notifications with platform-specific formatting, tags for audience segmentation, and scheduled notifications. You can send notifications to millions of devices with a single API call, and Notification Hubs handles the scaling and fan-out to individual PNS services.
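
Once the PNS credentials are configured on the hub, the backend can reach devices on every platform with one call. A hedged sketch using the Microsoft.Azure.NotificationHubs package and a template registration; the connection string, hub name, and template parameter name are assumptions:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.NotificationHubs;

NotificationHubClient hub = NotificationHubClient.CreateClientFromConnectionString(
    "<DefaultFullSharedAccessSignature connection string>",
    "my-notification-hub");

// Devices registered with a template containing $(messageParam) receive this
// on iOS (APNS), Android (FCM), and Windows (WNS) alike; Notification Hubs
// produces each platform-specific payload.
var properties = new Dictionary<string, string>
{
    { "messageParam", "Your order has shipped." }
};
await hub.SendTemplateNotificationAsync(properties);
```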

Option B (single API endpoint with device tokens) describes how you might implement push notifications manually without Notification Hubs. While Notification Hubs does provide a unified API, you still need to configure PNS credentials for it to communicate with each platform’s notification service.

Option C (Event Grid subscriptions) is used for event-driven architectures and routing events to subscribers, not for sending push notifications to mobile devices through platform-specific notification services.

Option D (Service Bus topics) provides publish-subscribe messaging for application integration but doesn’t offer the platform-specific push notification delivery capabilities that Notification Hubs provides through PNS integration.

Question 193

You are developing an Azure Function that must process messages from Azure Service Bus in exactly the order they were sent. What should you implement?

A) Configure message sessions on the Service Bus queue

B) Set ReceiveMode to PeekLock in the function

C) Enable duplicate detection on the queue

D) Implement custom message sequencing logic

Answer: A

Explanation:

Configuring message sessions on the Service Bus queue ensures that messages are processed in exactly the order they were sent. Sessions provide first-in-first-out (FIFO) guarantees by grouping related messages together using a session identifier, and ensuring that all messages in a session are processed sequentially by a single receiver.

When you enable sessions on a Service Bus queue, each message must include a session ID. All messages with the same session ID form a session and are processed in order. The Service Bus locks the entire session to a single receiver, preventing concurrent processing of messages within that session. The receiver processes messages one by one in the order they were enqueued, guaranteeing FIFO ordering.

In Azure Functions, you configure session handling by setting IsSessionsEnabled to true in the ServiceBusTrigger attribute and setting the function to process sessions. The function receives the session ID and processes all messages in that session sequentially. The function completes processing one message before receiving the next message from the session.
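
A rough sketch of a sessions-enabled trigger (in-process model with the Microsoft.Azure.WebJobs.Extensions.ServiceBus 5.x extension, which binds to Azure.Messaging.ServiceBus types); the queue name and connection setting are placeholders:

```csharp
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrderedProcessing
{
    [FunctionName("ProcessOrderEvents")]
    public static void Run(
        [ServiceBusTrigger("order-events", Connection = "ServiceBusConnection", IsSessionsEnabled = true)]
        ServiceBusReceivedMessage message,
        ILogger log)
    {
        // Messages sharing a SessionId arrive here in FIFO order,
        // one at a time per session.
        log.LogInformation("Session {SessionId}: {Body}",
            message.SessionId, message.Body.ToString());
    }
}
```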

Sessions are essential for scenarios requiring ordered processing like order fulfillment workflows, financial transactions, or any business process where message sequence matters. You can have multiple sessions processing concurrently (different session IDs), but within each session, processing is strictly ordered.

Option B (PeekLock receive mode) controls message lock behavior and supports at-least-once delivery semantics but doesn’t guarantee message ordering. Multiple receivers could process messages concurrently in any order when not using sessions.

Option C (duplicate detection) prevents duplicate messages from being enqueued based on message ID but doesn’t provide ordering guarantees. Duplicate detection works based on a time window and message IDs, independent of processing order.

Option D (custom sequencing logic) would require implementing ordering logic in your function code, maintaining state about processed messages, and handling concurrency issues. This is complex, error-prone, and unnecessary when sessions provide built-in ordering.

Question 194

You need to implement caching for an API that returns data that changes infrequently. The cache must expire after 1 hour. What should you configure in Azure API Management?

A) Inbound policy with cache-lookup and outbound policy with cache-store

B) Backend policy with caching directives

C) Outbound policy with cache-store only

D) Product-level caching settings

Answer: A

Explanation:

Configuring inbound policy with cache-lookup and outbound policy with cache-store provides complete caching implementation in Azure API Management. The cache-lookup policy in the inbound section checks for cached responses before forwarding requests to the backend, while cache-store in the outbound section saves responses to cache for subsequent requests.

The cache-lookup policy should be placed early in the inbound processing pipeline. When a request arrives, this policy generates a cache key based on request attributes like URL, query parameters, or headers. If a cached response exists and hasn’t expired, API Management returns it immediately without calling the backend service. This significantly reduces backend load and improves response times.

The cache-store policy in the outbound section stores the backend response in cache with a specified duration. You configure the duration attribute to 3600 seconds (1 hour) to match your expiration requirement. The policy can include vary-by attributes to create different cache entries based on query parameters, headers, or other request characteristics, allowing granular cache control.

This two-policy approach provides flexibility for cache configuration. You can implement conditional caching based on request attributes, vary cache duration by operation, or exclude certain responses from caching based on status codes or content. API Management supports both internal cache and external Redis cache for larger-scale scenarios.
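
A minimal sketch combining the two policies; the vary-by settings are illustrative, and the duration of 3600 seconds matches the 1-hour requirement:

```xml
<policies>
    <inbound>
        <base />
        <!-- Serve a cached response if a fresh one exists for this request -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false">
            <vary-by-query-parameter>category</vary-by-query-parameter>
        </cache-lookup>
    </inbound>
    <outbound>
        <base />
        <!-- Store the backend response in cache for 1 hour -->
        <cache-store duration="3600" />
    </outbound>
</policies>
```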

Option B (backend policy with caching directives) controls how API Management communicates with the backend but doesn’t implement response caching. Backend policies modify requests sent to backend services, not responses returned to clients.

Option C (outbound policy with cache-store only) stores responses in cache but without cache-lookup in the inbound section, every request still reaches the backend service. The cache would be populated but never used, defeating the purpose of caching.

Option D (product-level caching settings) doesn’t exist as a native API Management feature. Caching is configured through policies at operation, API, product, or global level using cache-lookup and cache-store policies.

Question 195

You are implementing Azure DevOps pipeline for deploying containerized applications to Azure Kubernetes Service. The pipeline must build images, scan for vulnerabilities, and deploy to AKS. What should you include?

A) Docker task, vulnerability scanning task, and kubectl apply task

B) Azure Container Registry build task and AKS deployment task

C) Docker Compose task and Helm deployment task

D) Build task, push task, and manual deployment script

Answer: B

Explanation:

Azure Container Registry build task and AKS deployment task provide the integrated and recommended approach for containerized application deployment pipelines. ACR build task handles container image building with built-in vulnerability scanning through Azure Security Center integration, while the AKS deployment task manages deployment orchestration with proper authentication and rollout controls.

The ACR build task builds container images directly in Azure Container Registry without requiring a Docker daemon in your pipeline agent. It uses the ACR Tasks feature which provides efficient layer caching, multi-stage build support, and automatic base image updates. ACR integrates with Microsoft Defender for Containers to scan images for vulnerabilities, providing security insights before deployment.

After images are built and scanned in ACR, the AKS deployment task deploys to your cluster using Kubernetes manifests, Helm charts, or Kustomize configurations. The task handles authentication to AKS using service connections, supports different deployment strategies like rolling updates or blue-green deployments, and provides built-in validation to ensure deployments succeed before completing the pipeline.

This approach follows Azure-native best practices, utilizing managed services for heavy lifting. ACR handles image building and storage with geo-replication support, while the AKS deployment task abstracts kubectl complexity and provides declarative deployment configuration within Azure DevOps. The pipeline is more maintainable and benefits from continuous Azure service improvements.

Option A (separate Docker, scanning, and kubectl tasks) requires manual orchestration of multiple steps including Docker installation on agents, separate vulnerability scanning tool configuration, and kubectl authentication setup. This approach is more complex and requires more maintenance.

Option C (Docker Compose and Helm) is viable but Docker Compose is typically used for local development rather than CI/CD pipelines. While Helm is a valid deployment approach, the native AKS deployment task provides better integration with Azure DevOps.

Option D (manual deployment script) introduces maintenance overhead and potential errors from custom scripting. Native Azure DevOps tasks provide better reliability, logging, and error handling compared to custom scripts.

Question 196

You need to implement authentication for an ASP.NET Core web application that allows users to sign in with Microsoft, Google, and Facebook accounts. What should you implement?

A) Azure AD B2C with external identity providers

B) Azure AD with guest user invitations

C) Individual external OAuth providers in Startup.cs

D) OpenID Connect with custom provider selection

Answer: A

Explanation:

Azure AD B2C with external identity providers offers the comprehensive and manageable solution for supporting multiple social identity providers in your application. B2C is specifically designed for customer-facing applications and provides built-in integration with major social identity providers including Microsoft, Google, Facebook, Amazon, LinkedIn, and Twitter.

B2C centralizes identity provider configuration and user management in Azure rather than in your application code. You configure each identity provider once in the Azure portal by registering your application with the provider and adding the credentials to B2C. B2C then handles the complete OAuth/OpenID Connect flow including redirects, token exchange, and user profile retrieval.

The service provides customizable user flows (sign-up, sign-in, profile edit, password reset) with configurable UI through custom policies or built-in page customization. Users see all available identity providers on the sign-in page and can link multiple providers to a single B2C account. B2C maintains the user profile and handles account linking across providers.

In your ASP.NET Core application, you simply configure authentication middleware to use Azure AD B2C with your tenant details and user flow names. The middleware handles all authentication protocol details, and your application receives standard claims regardless of which identity provider the user chose. This abstraction simplifies your application code significantly.
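
A hedged sketch of the ASP.NET Core wiring with the Microsoft.Identity.Web package (minimal hosting model); the AzureAdB2C configuration section, containing the tenant, client ID, and user-flow name, is assumed to exist in appsettings.json:

```csharp
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Identity.Web;

var builder = WebApplication.CreateBuilder(args);

// Binds to the "AzureAdB2C" section (Instance, Domain, ClientId,
// SignUpSignInPolicyId, ...) and handles the OpenID Connect flow,
// whichever social identity provider the user picks.
builder.Services
    .AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAdB2C"));

builder.Services.AddRazorPages();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();
app.MapRazorPages();

app.Run();
```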

Option B (Azure AD with guest invitations) is designed for business-to-business scenarios where external users are invited to access organizational resources. It doesn’t provide the same social identity provider integration or user experience that B2C offers for customer-facing applications.

Option C (individual external OAuth providers in Startup.cs) requires implementing and maintaining OAuth configuration for each provider separately in your application code. You need to handle registration with each provider, manage credentials, implement callback handling, and maintain user profile mapping. This approach is more complex and tightly couples identity provider logic to your application.

Option D (OpenID Connect with custom provider selection) would require building custom logic to present provider choices, manage multiple provider configurations, and handle the authentication flow for each. This duplicates functionality that Azure AD B2C provides out of the box.

Question 197

You are developing an Azure Function that processes large files. The function must handle files up to 500 MB. What hosting plan should you use?

A) Consumption plan with increased timeout

B) Premium plan with larger instance sizes

C) App Service plan with Always On enabled

D) Dedicated plan with manual scaling

Answer: B

Explanation:

Premium plan with larger instance sizes provides the optimal hosting solution for Azure Functions processing large files. The Premium plan offers more memory (up to 14 GB), faster processors, and VNET integration while maintaining the serverless scaling capabilities needed for variable workloads.

Processing 500 MB files requires substantial memory to load and process the file contents. The Consumption plan provides only 1.5 GB of memory per instance, which is insufficient for large file processing along with application overhead. Premium plan instances with EP3 size provide 14 GB memory, accommodating large files comfortably. The plan also supports pre-warmed instances, eliminating cold starts that could timeout during large file processing.

The Premium plan allows much longer executions (30 minutes by default, configurable up to unbounded) compared to the Consumption plan’s 10-minute maximum (5-minute default). Large file processing often requires extended processing time for operations like parsing, transformation, or analysis. The Premium plan also offers higher throughput for file I/O operations, which is crucial when working with large files from Azure Storage or other sources.

The plan maintains event-driven scaling like the Consumption plan, automatically adding instances during high-load periods and scaling back in during idle periods, although it always keeps at least one pre-warmed instance rather than scaling to zero. This balances cost efficiency with guaranteed availability. The Premium plan also includes VNET integration for secure access to resources and enhanced networking capabilities for faster file transfers.

Option A (Consumption plan with increased timeout) allows extending timeout to 10 minutes maximum, but this may be insufficient for 500 MB files. More critically, the 1.5 GB memory limit makes processing large files problematic or impossible depending on processing requirements.

Option C (App Service plan with Always On) provides a dedicated compute environment but charges continuously regardless of usage. While it offers sufficient resources, it’s less cost-effective than Premium plan for variable workloads. Always On prevents cold starts but doesn’t provide the same autoscaling capabilities.

Option D (Dedicated plan with manual scaling) requires manual intervention to adjust capacity based on load, lacking the automatic scaling that serverless plans provide. This increases operational overhead and may result in either insufficient capacity during peaks or wasted resources during low usage.

Question 198

You need to implement Azure Event Grid to route events to different handlers based on event properties. What should you configure?

A) Event subscription with subject filters

B) Event Grid domain with topic routing

C) Event subscription with advanced filters

D) Event Hub capture with routing rules

Answer: C

Explanation:

Event subscription with advanced filters provides the comprehensive filtering capabilities needed to route events based on event properties. Advanced filters allow you to create complex filtering logic using operators that evaluate any property in the event data, enabling precise control over which events reach specific handlers.

Advanced filters support multiple operators including NumberIn, NumberNotIn, NumberLessThan, NumberGreaterThan, StringContains, StringBeginsWith, StringEndsWith, and BoolEquals. You can filter on any property in the event data using dot notation for nested properties. For example, you might filter events where data.orderValue > 1000 AND data.region = ‘West’ AND subject beginsWith ‘/orders/priority’.

Each event subscription supports up to 25 filter conditions combined with AND logic. This allows complex routing scenarios where different event subscriptions with different filters route events to appropriate handlers. For instance, high-value orders might route to a premium processing function while standard orders go to regular processing, all based on event property values.

Advanced filters are evaluated server-side in Event Grid before events are delivered to subscribers. This means filtered events never leave Event Grid’s infrastructure, reducing network traffic, subscriber processing load, and costs. The filtering is highly efficient and doesn’t impact event delivery latency for matching events.

Option A (subject filters) matches only the beginning or end of the event subject field using simple string matching. While useful for basic scenarios, subject filters don’t provide the property-based routing capabilities that advanced filters offer for complex event routing logic.

Option B (Event Grid domain with topic routing) helps organize multiple event sources and subscriptions under a single domain for management purposes, but event routing still relies on filters configured in event subscriptions. Domains simplify management but don’t provide filtering capabilities themselves.

Option D (Event Hub capture with routing rules) is a different service and feature. Event Hub Capture automatically saves event data to Blob Storage or Data Lake Storage but doesn’t provide event routing to different handlers based on properties like Event Grid does.

Question 199

You are implementing Azure Cosmos DB with the SQL API. You need to execute a query that returns items across multiple partitions efficiently. What should you ensure?

A) Include the partition key in the WHERE clause

B) Enable cross-partition queries in the query options

C) Use stored procedures for cross-partition operations

D) Configure unlimited container throughput

Answer: B

Explanation:

Enabling cross-partition queries in the query options allows Cosmos DB to execute queries across all partitions when the query doesn’t include a partition key filter. By setting EnableCrossPartitionQuery to true in the query request options, you explicitly allow the SDK to fan out the query to all partitions and aggregate results.

Cross-partition queries work by the Cosmos DB SDK sending the query to all physical partitions, executing the query against each partition’s data in parallel, and then merging the results. While this enables powerful querying capabilities without knowing the partition key, it consumes more RUs because the query executes against all partitions rather than a single partition.
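
A brief sketch using the .NET SDK v2 (Microsoft.Azure.DocumentDB), where the flag lives on FeedOptions; in the v3 SDK (Microsoft.Azure.Cosmos) cross-partition queries run without an explicit flag. The endpoint, database, container, and property names are placeholders:

```csharp
using System;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;

var client = new DocumentClient(
    new Uri("https://my-cosmos-account.documents.azure.com:443/"), "<account-key>");

var options = new FeedOptions
{
    EnableCrossPartitionQuery = true,  // explicit opt-in to the fan-out query
    MaxDegreeOfParallelism = -1        // let the SDK parallelize across partitions
};

var query = client.CreateDocumentQuery<dynamic>(
        UriFactory.CreateDocumentCollectionUri("appdb", "orders"),
        "SELECT * FROM c WHERE c.status = 'shipped'",
        options)
    .AsDocumentQuery();

while (query.HasMoreResults)
{
    foreach (var item in await query.ExecuteNextAsync<dynamic>())
    {
        // process each result merged from all partitions
    }
}
```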

The SDK handles pagination automatically for cross-partition queries using continuation tokens. For large result sets spanning multiple partitions, the SDK retrieves results in batches and provides continuation tokens to retrieve subsequent pages. This ensures memory efficiency even when querying large datasets across many partitions.

Cross-partition queries are sometimes necessary for administrative queries, reports, or scenarios where filtering by partition key isn’t possible. However, for performance-critical queries or high-throughput scenarios, you should design your partition key strategy to enable single-partition queries whenever possible. The EnableCrossPartitionQuery flag serves as an explicit acknowledgment that you understand the query will be more expensive.

Option A (including partition key in WHERE clause) enables single-partition queries which are more efficient and less expensive. However, this doesn’t address scenarios where you need to query across partitions because you don’t know the partition key or need data from multiple partitions.

Option C (stored procedures for cross-partition operations) is not viable because stored procedures in Cosmos DB are scoped to a single partition. They are useful for transactional operations within a partition but cannot execute across multiple partitions, making them unsuitable for cross-partition queries.

Option D (unlimited container throughput) provides auto-scaling for RU consumption but doesn’t enable or optimize cross-partition queries. While adequate throughput is necessary for any query, it doesn’t address the requirement to enable cross-partition query execution.

Question 200

You need to implement Azure Application Gateway with Web Application Firewall to protect a web application from SQL injection attacks. What should you configure?

A) Custom WAF rules with SQL injection pattern matching

B) OWASP Core Rule Set with SQL injection protection

C) Backend health probes with security validation

D) URL rewrite rules with input sanitization

Answer: B

Explanation:

OWASP Core Rule Set with SQL injection protection provides comprehensive, industry-standard protection against SQL injection attacks. Azure WAF includes managed rule sets based on OWASP (Open Web Application Security Project) core rules that are regularly updated to protect against emerging threats including SQL injection, cross-site scripting, and other common web vulnerabilities.

The OWASP Core Rule Set (CRS) is enabled by default when you configure WAF on Application Gateway. It includes multiple rules specifically designed to detect and block SQL injection attempts by analyzing request parameters, headers, and body content for malicious SQL patterns. The rules detect common SQL injection techniques like boolean-based injection, time-based blind injection, and union-based injection.

WAF operates in either detection mode or prevention mode. Detection mode logs suspicious requests without blocking them, useful for testing and tuning before enforcement. Prevention mode actively blocks requests that match rule patterns, protecting your application from attacks. You can review WAF logs in Azure Monitor to analyze blocked requests and tune rules as needed.

The managed rule sets receive automatic updates from Microsoft as new vulnerabilities and attack patterns emerge. This ensures your application stays protected against the latest threats without requiring manual rule updates. You can also create custom rules to supplement the OWASP rules for application-specific protection requirements.

Option A (custom WAF rules with SQL injection patterns) requires manually defining and maintaining SQL injection detection patterns. While custom rules have their place for application-specific scenarios, the OWASP Core Rule Set provides comprehensive, tested, and maintained SQL injection protection that would be difficult to replicate manually.

Option C (backend health probes with security validation) monitors the health and availability of backend servers but doesn’t inspect or filter malicious requests. Health probes ensure traffic only goes to healthy backends but don’t provide WAF protection.

Option D (URL rewrite rules with input sanitization) modify request URLs as they pass through Application Gateway but don’t provide security inspection or blocking capabilities. URL rewriting is for routing and URL manipulation, not security protection against injection attacks.
